data.table
==========
Another package for data processing that has been useful to many is data.table. It works in a notably different way than dplyr, but you’d use it for the same reasons, e.g. subsetting, grouping, updating, ordered joins, etc., with key advantages in speed and memory efficiency. Like dplyr, the data objects are both data.frames and a package-specific class.
```
library(data.table)
dt = data.table(x = sample(1:10, 6),
                g = 1:3,
                y = runif(6))
class(dt)
```
```
[1] "data.table" "data.frame"
```
data.table Basics
-----------------
In general, data.table works with brackets as in base R data frames. However, in order to use data.table effectively you’ll need to forget the data frame similarity. The brackets actually work like a function call, with several key arguments. Consider the following notation to start.
```
x[i, j, by, keyby, with = TRUE, ...]
```
Importantly: *you don’t use the brackets as you would with data.frames*. What **i** and **j** can be is fairly complex.
In general, you use **i** for filtering by rows.
```
dt[2] # rows! not columns as with standard data.frame
dt[2,]
```
```
x g y
1: 5 2 0.1079452
x g y
1: 5 2 0.1079452
```
You use **j** to select (by name!) or create new columns. We can define a new column with the `:=` operator.
```
dt[,x]
dt[,z := x+y] # dt now has a new column
dt[,z]
dt[g > 1, mean(z), by = g]
dt
```
```
[1] 6 5 2 9 8 1
[1] 6.908980 5.107945 2.843715 9.780681 8.215221 1.334649
g V1
1: 2 6.661583
2: 3 2.089182
x g y z
1: 6 1 0.9089802 6.908980
2: 5 2 0.1079452 5.107945
3: 2 3 0.8437154 2.843715
4: 9 1 0.7806815 9.780681
5: 8 2 0.2152209 8.215221
6: 1 3 0.3346486 1.334649
```
Because **j** is an argument, dropping columns is awkward.
```
dt[, -y] # creates negative values of y
dt[, -'y', with = F] # drops y, but now needs quotes
## dt[, y := NULL] # drops y, but this is just a base R approach
## dt$y = NULL
```
```
[1] -0.9089802 -0.1079452 -0.8437154 -0.7806815 -0.2152209 -0.3346486
x g z
1: 6 1 6.908980
2: 5 2 5.107945
3: 2 3 2.843715
4: 9 1 9.780681
5: 8 2 8.215221
6: 1 3 1.334649
```
data.table does not make unnecessary copies. For example, if we do the following…
```
DT = data.table(A = 5:1, B = letters[5:1])
DT2 = DT
DT3 = copy(DT)
```
DT2 and DT are just names for the same table. You’d actually need to use the copy function to make an explicit copy; otherwise, whatever you do to DT2 will be done to DT.
```
DT2[,q:=1]
DT
```
```
A B q
1: 5 e 1
2: 4 d 1
3: 3 c 1
4: 2 b 1
5: 1 a 1
```
```
DT3
```
```
A B
1: 5 e
2: 4 d
3: 3 c
4: 2 b
5: 1 a
```
Grouped Operations
------------------
We can now attempt a ‘group-by’ operation, along with creation of a new variable. Note that these operations actually modify the dt object *in place*, a key distinction from dplyr. Fewer copies means less of a memory hit.
```
dt1 = dt2 = dt
dt[, sum(x, y), by = g] # sum of all x and y values
```
```
g V1
1: 1 16.689662
2: 2 13.323166
3: 3 4.178364
```
```
dt1[, mysum := sum(x), by = g] # add new variable to the original data
dt1
```
```
x g y z mysum
1: 6 1 0.9089802 6.908980 15
2: 5 2 0.1079452 5.107945 13
3: 2 3 0.8437154 2.843715 3
4: 9 1 0.7806815 9.780681 15
5: 8 2 0.2152209 8.215221 13
6: 1 3 0.3346486 1.334649 3
```
We can also create groupings on the fly. For a new summary data set, we’ll take the following approach: create a grouping based on whether `g` equals one or not, then get the mean and sum of `x` for those two categories. The corresponding dplyr approach is also shown (but not evaluated) for comparison.
```
dt2[, list(mean_x = mean(x), sum_x = sum(x)), by = g == 1]
```
```
g mean_x sum_x
1: TRUE 7.5 15
2: FALSE 4.0 16
```
```
## dt2 %>%
## group_by(g == 1) %>%
## summarise(mean_x = mean(x), sum_x = sum(x))
```
Faster!
-------
As mentioned, the reason to use data.table is speed. If you have large data or heavy operations, it’ll be useful.
### Joins
Joins can not only be faster, they are also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to dplyr’s comparable left_join function.
```
dt1 = setkey(dt1, x)
dt1[dt2]
dt1_df = dt2_df = as.data.frame(dt1)
left_join(dt1_df, dt2_df, by = 'x')
```
| func | mean (microseconds) |
| --- | --- |
| dt_join | 504.77 |
| dplyr_join | 1588.46 |
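For reference, here is a sketch of how such timings might be produced with the microbenchmark package, using the objects defined above (exact numbers will of course vary by machine):
```
library(microbenchmark)
library(dplyr)

microbenchmark(
  dt_join    = dt1[dt2],                             # keyed data.table join
  dplyr_join = left_join(dt1_df, dt2_df, by = 'x'),  # dplyr equivalent
  times = 100
)
```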
### Group by
We can use the setkey function to order a data set by one or more columns. This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timings below are for creating a new grouped-mean variable.
```
test_dt0 = data.table(x = rnorm(10000000),
                      g = sample(letters, 10000000, replace = T))
test_dt1 = copy(test_dt0)
test_dt2 = setkey(test_dt1, g)
identical(test_dt0, test_dt1)
```
```
[1] FALSE
```
```
identical(test_dt1, test_dt2)
```
```
[1] TRUE
```
```
test_dt0 = test_dt0[, mean := mean(x), by = g]
test_dt1 = test_dt1[, mean := mean(x), by = g]
test_dt2 = test_dt2[, mean := mean(x), by = g]
```
| func | mean (milliseconds) |
| --- | --- |
| test_dt0 | 381.29 |
| test_dt1 | 118.52 |
| test_dt2 | 109.97 |
### String matching
The %chin% operator is just like the %in% operator, but optimized for character vectors; its companion function, chmatch, returns a vector of the *positions* of (first) matches of its first argument in its second, where both arguments are character vectors.
Consider the following. We sample the first 14 letters 1000 times with replacement and see which ones match a subset of the later letters. I compare the same operation to stringr and to the stringi package, whose functionality stringr uses under the hood. Both are far slower than %chin%.
```
lets_1 = sample(letters[1:14], 1000, replace=T)
lets_1 %chin% letters[13:26] %>% head(10)
```
```
[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
```
```
# stri_detect_regex(lets_1, paste(letters[13:26], collapse='|'))
```
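As a small illustration of the distinction, using the objects above: %chin% returns a logical vector just like %in%, while its companion chmatch returns the integer positions of first matches.
```
# logical matching, identical in result to %in%
identical(lets_1 %chin% letters[13:26], lets_1 %in% letters[13:26])  # TRUE

# chmatch gives the position of the first match (NA if no match)
chmatch(c('a', 'm', 'z'), letters[13:26])  # NA  1 14
```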
### Reading files
If you use data.table for nothing else, you’d still want to consider it strongly for reading in large text files. The fread function can be quite memory efficient too. I compare it to readr.
```
fread('data/cars.csv')
```
| func | mean (microseconds) |
| --- | --- |
| dt | 430.91 |
| readr | 2900.19 |
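If you don’t have that particular file on hand, here is a self-contained sketch of the comparison, writing a small temporary csv first (the gap in favor of fread grows with file size):
```
library(data.table)
library(readr)
library(microbenchmark)

tmp = tempfile(fileext = '.csv')
fwrite(as.data.table(mtcars), tmp)   # write a small example file

microbenchmark(
  dt    = fread(tmp),
  readr = suppressMessages(read_csv(tmp)),
  times = 50
)
```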
### More speed
The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced them on my own machine using 50 million observations. The grouped operations applied are just a sum and length on a vector.
By the way, never, ever use aggregate. For anything.
| fun | elapsed |
| --- | --- |
| aggregate | 56.857 |
| by | 18.118 |
| dplyr | 14.447 |
| sapply | 12.200 |
| lapply | 11.309 |
| tapply | 10.570 |
| data.table | 0.866 |
Ever.
Really.
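For a sense of what is being timed, here is a sketch of the grouped sum and count in both styles on a much smaller data set (the linked benchmark uses 50 million rows; object names here are arbitrary):
```
set.seed(123)
bench_df = data.frame(x = rnorm(1e5),
                      g = sample(letters, 1e5, replace = TRUE))
bench_dt = as.data.table(bench_df)

# data.table: grouped sum and count
bench_dt[, .(sum = sum(x), n = .N), by = g]

# aggregate equivalent, which slows down badly as the data grow
aggregate(x ~ g, data = bench_df, FUN = function(v) c(sum = sum(v), n = length(v)))
```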
Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. In practice, it likely won’t be notably faster except in some cases, like with aggregate.
Pipe with data.table
--------------------
Piping can be done with data.table objects too, using the brackets, but it’s awkward at best.
```
mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group][, -'y', with = FALSE]
mydt[, newvar := mean(x),
][, newvar2 := sum(newvar), by = group
][,-'y', with=FALSE]
```
Probably better to just use a standard pipe and dot approach if you really need it.
```
mydt[, newvar := mean(x), ] %>%
.[, newvar2 := sum(newvar), by = group] %>%
.[, -'y', with = FALSE]
```
data.table Summary
------------------
Faster and more memory-efficient methods are great to have. If you have large data this is one package that can help:
* For reading data
* Especially for group-by operations and joins
Drawbacks:
* Complex
* The syntax can be awkward
* It doesn’t work like a data.frame, which can be confusing
* Piping with brackets isn’t really feasible, and the dot approach is awkward
* Does not have its own ‘verse’, though many packages use it
If speed and/or memory is (potentially) a concern, data.table.
For interactive exploration, dplyr.
Piping allows one to use both, so no need to choose.
And on the horizon…
Faster dplyr Alternatives
-------------------------
So we have data.table as a starting point for faster data processing operations, but there are others. The dtplyr package implements a data.table back-end for dplyr, so that you can seamlessly use them together. The newer tidyfast package works directly with a data.table object, but uses dplyr-esque functions. The following shows times for counting unique arrival times in the nycflights13 flights data (336776 rows).
| package | timing |
| --- | --- |
| dplyr | 10.580 |
| dtplyr | 4.575 |
| data.table | 3.519 |
| tidyfast | 3.507 |
*Median time in milliseconds to do a count of arr_time on nycflights13::flights*
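The compared calls look roughly like the following (dt_count comes from tidyfast; treat this as a sketch, since APIs may shift across package versions):
```
library(dplyr)
library(dtplyr)
library(data.table)
library(tidyfast)

flights = nycflights13::flights

count(flights, arr_time)                               # dplyr
lazy_dt(flights) %>% count(arr_time) %>% as_tibble()   # dtplyr
as.data.table(flights)[, .N, by = arr_time]            # data.table
dt_count(as.data.table(flights), arr_time)             # tidyfast
```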
Just for giggles I did the same in Python with a pandas DataFrame, and it was notably slower than all of these options (almost 10x slower than standard dplyr). A lot of folks that use Python think R is slow, but that is mostly because they don’t know how to effectively program with R for data science.
#### Out of memory situations
For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, which is the default for R. This allows you to process data that may be too large or too time-consuming to handle otherwise. For example, it’d be a great option if you are starting out with extremely large data, but your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing the data into memory.
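A minimal sketch of that workflow with disk.frame follows; the csv path and column names are hypothetical placeholders.
```
library(disk.frame)
library(dplyr)

setup_disk.frame()   # use multiple workers for chunk-wise processing

# convert a large csv to a disk.frame without loading it all into memory
big_df = csv_to_disk.frame('data/very_large_file.csv', outdir = 'big_df.df')

# filtering/selection happens chunk-wise on disk; collect() brings the result into RAM
result = big_df %>%
  filter(group == 'a') %>%
  select(id, value) %>%
  collect()
```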
data.table Exercises
--------------------
### Exercise 0
Install and load the data.table package.
Create the following data table.
```
mydt = data.table(
  expand.grid(x = 1:3,
              y = c('a', 'b', 'c')),
  z = sample(1:20, 9)
)
```
### Exercise 1
Create a new object that contains only the ‘a’ group. Think back to how you use a logical to select rows.
### Exercise 2
Create a new object that is the sum of z grouped by x. You don’t need to name the sum variable.
Programming Basics
==================
Becoming a better programmer is in many ways like learning any language. While it may be literal, there is much nuance, and many ways are available to express yourself to solve a problem. However, it doesn’t take much practice to develop a few skills that will not only last, but will go a long way toward saving you time and allowing you to explore your data, models, and visualizations more extensively. So let’s get to it!
R Objects
---------
### Object Inspection & Exploration
Let’s say you’ve imported your data into R. If you are going to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame, or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to use View, or to click on it in your Environment viewer.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse provides a perhaps more readable version, and is essentially str specifically suited to data frames. In both cases, we get info about the object and the various things within it.
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the table that’s printed in the summary. And we can now get that ready for publishing, for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12.30 | 18.72 | 0.66 | 0.52 |
| cyl | -0.11 | 1.05 | -0.11 | 0.92 |
| disp | 0.01 | 0.02 | 0.75 | 0.46 |
| hp | -0.02 | 0.02 | -0.99 | 0.33 |
| drat | 0.79 | 1.64 | 0.48 | 0.64 |
| wt | -3.72 | 1.89 | -1.96 | 0.06 |
| qsec | 0.82 | 0.73 | 1.12 | 0.27 |
| vs | 0.32 | 2.10 | 0.15 | 0.88 |
| am | 2.52 | 2.06 | 1.23 | 0.23 |
| gear | 0.66 | 1.49 | 0.44 | 0.67 |
| carb | -0.20 | 0.83 | -0.24 | 0.81 |
After a while, you’ll know what’s in the objects you use most often, and that will allow you to work with their contents more easily and efficiently.
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you’ll see even more methods for the summary function. When you call summary on an object, the appropriate summary method will be used depending on the class of the object. If there is no specific method, e.g. when we called summary on something that already had summary called on it, it will just use a default version that lists the contents. To see all the methods for summary, type the following, and you’ll see all that is currently available for your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
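To see the dispatch mechanism concretely, here is a minimal sketch that defines a summary method for a made-up class (the class name and contents are purely illustrative):
```
# an ordinary list tagged with a custom S3 class
grade_report = structure(
  list(scores = c(88, 92, 75, 64, 99)),
  class = 'grade_report'
)

# summary() will automatically dispatch to this method for that class
summary.grade_report = function(object, ...) {
  cat('Grade report with', length(object$scores), 'scores\n')
  cat('Average score:', mean(object$scores), '\n')
  invisible(object)
}

summary(grade_report)
```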
Say you are new to a modeling package, and as such, you might want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can then list all the functions that can be used on that object.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
### S4 classes
Everything we’ve dealt with to this point involves S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, but I’m not going to say much about the S4 system of objects other than that they are a separate type of object with their own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than using the `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to work with for applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
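For comparison, here is a minimal sketch of defining and using a toy S4 class directly (not from any package):
```
# define a class with two numeric slots
setClass('simple_point', slots = c(x = 'numeric', y = 'numeric'))

p = new('simple_point', x = 1, y = 2)

p@x           # slot access uses @ rather than $
slotNames(p)  # "x" "y"

# behavior is attached to generics via methods rather than stored in the object
setMethod('show', 'simple_point', function(object) {
  cat('point at (', object@x, ',', object@y, ')\n')
})

p
```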
### Others
Indeed there are more types of R objects, but they will probably not be of much concern to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html). I can only say that you’ll just have to cross that bridge should you get to it.
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
## lm
```
```
function (formula, data, subset, weights, na.action, method = "qr",
    model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
    contrasts = NULL, offset, ...)
{
    ret.x <- x
    ret.y <- y
    cl <- match.call()
    mf <- match.call(expand.dots = FALSE)
    m <- match(c("formula", "data", "subset", "weights", "na.action",
        "offset"), names(mf), 0L)
    mf <- mf[c(1L, m)]
    mf$drop.unused.levels <- TRUE
    mf[[1L]] <- quote(stats::model.frame)
    mf <- eval(mf, parent.frame())
    if (method == "model.frame")
        return(mf)
    else if (method != "qr")
        warning(gettextf("method = '%s' is not supported. Using 'qr'",
            method), domain = NA)
    mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
Documentation
-------------
Many applied users of R are quick to search the web for help when they come to a problem. This is great; you’ll find a lot of information out there. However, it will likely take you a bit to sort through things and find exactly what you need. Strangely, I see many users of R don’t consult the documentation, e.g. help files, package websites, etc., first, and yet this is typically the quickest way to answer many of the questions they’ll have.
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
We get a brief description of the function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5. Next is the size, which is how many values we want, in this case 10. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh, we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
Now we are on our way!
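For completeness, the working call looks like this (your values will differ, since the result is random):
```
nums = 1:5
sample(nums, 10, replace = TRUE)
```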
The help file gives detailed information about the sampling that is possible, which actually is not as simple as one would think! The **`Value`** is important, as it tells us what we can expect the function to return, whether a data frame, list, or whatever. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all this information, but most do, and if they are adhering to standards they will[11](#fn11). However, all functions have this same documentation form, which puts R above and beyond most programming languages in this regard. Once you look at a couple of help files, you’ll always be able to quickly find the information you need from any other.
Objects Exercises
-----------------
With one function, find out the class, the number of rows, and the number of columns of the following object, as well as what kind of object the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
R Objects
---------
### Object Inspection \& Exploration
Let’s say you’ve imported your data into R. If you are going to be able to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically, an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to look at it this way with View, or clicking on it in your Environment viewer.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse perhaps provides a possibly more readable version, and is just str specifically suited toward data frames. In both cases, we get info about the object and the various things within it.
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0\), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the full table that's printed in the summary. And we can then get that ready for publishing, for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12\.30 | 18\.72 | 0\.66 | 0\.52 |
| cyl | \-0\.11 | 1\.05 | \-0\.11 | 0\.92 |
| disp | 0\.01 | 0\.02 | 0\.75 | 0\.46 |
| hp | \-0\.02 | 0\.02 | \-0\.99 | 0\.33 |
| drat | 0\.79 | 1\.64 | 0\.48 | 0\.64 |
| wt | \-3\.72 | 1\.89 | \-1\.96 | 0\.06 |
| qsec | 0\.82 | 0\.73 | 1\.12 | 0\.27 |
| vs | 0\.32 | 2\.10 | 0\.15 | 0\.88 |
| am | 2\.52 | 2\.06 | 1\.23 | 0\.23 |
| gear | 0\.66 | 1\.49 | 0\.44 | 0\.67 |
| carb | \-0\.20 | 0\.83 | \-0\.24 | 0\.81 |
After a while, you'll know what's in the objects you use most often, and that will allow you to work with their contents more easily and efficiently.
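Likewise, individual statistics can be plucked from the summary object whenever you need a single number rather than the printed display; for example:
```
lm_mod_summary$r.squared                    # just the R-squared
lm_mod_summary$coefficients[, 'Pr(>|t|)']   # just the p-values
```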
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you'll see even more methods for the summary function. When you call summary on an object, the appropriate summary method is used depending on the class of the object. If there is no method for that specific class, as when we called summary on an object that was already a summary, a default version is used that simply lists the contents. To see all the methods for summary, type the following, and you'll see everything currently available in your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
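Under the hood this is just S3 dispatch: summary looks for a function named `summary.<class>`, and falls back to the default method if it doesn't find one. A minimal sketch, where the class name 'widget' is simply made up for illustration:
```
w = structure(list(x = 1:3), class = 'widget')   # a toy object with an invented class

summary.widget = function(object, ...) {
  cat('A widget holding', length(object$x), 'values\n')
}

summary(w)   # now dispatches to summary.widget
```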
Say you are new to a modeling package, and as such, you might want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can then list all the functions that can be used on that object.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
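The same approach works for anything; for instance, with the model we already have:
```
class(lm_mod)          # find out the class first
methods(class = 'lm')  # then see everything you can do with it
```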
### S4 classes
Everything we've been dealing with to this point has been S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R's creation. S4 was the next iteration of S, but I'm not going to say much about the S4 system of objects other than that it is a separate type of object with its own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than the `@`, though that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to work with for applied work, but you should be at least somewhat familiar with them so that you won't be thrown off course when they appear.
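For example, with the Matrix object above you could inspect the slots directly, though the method-based route usually gets you what you want:
```
slotNames(car_matrix)   # what slots does it have?
car_matrix@Dim          # direct slot access
dim(car_matrix)         # the method-based way to get the same thing
```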
### Others
Indeed there are more types of R objects, but they will probably not be of much concern to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html) classes. I can only say that you'll just have to cross that bridge should you get to it.
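Should you be curious, here is a minimal sketch of what an R6 class looks like, assuming the R6 package is installed (the class itself is made up purely for illustration):
```
library(R6)

Counter = R6Class(
  'Counter',
  public = list(
    count = 0,
    add = function(by = 1) {
      self$count = self$count + by
      invisible(self)
    }
  )
)

cnt = Counter$new()
cnt$add()$add(5)   # R6 objects are mutable and methods can be chained
cnt$count          # 6
```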
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
Printing the function itself, i.e. just typing `lm` with no parentheses, shows the underlying code (only the first lines are displayed here).
```
lm
```
```
1 function (formula, data, subset, weights, na.action, method = "qr",
2 model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
3 contrasts = NULL, offset, ...)
4 {
5 ret.x <- x
6 ret.y <- y
7 cl <- match.call()
8 mf <- match.call(expand.dots = FALSE)
9 m <- match(c("formula", "data", "subset", "weights", "na.action",
10 "offset"), names(mf), 0L)
11 mf <- mf[c(1L, m)]
12 mf$drop.unused.levels <- TRUE
13 mf[[1L]] <- quote(stats::model.frame)
14 mf <- eval(mf, parent.frame())
15 if (method == "model.frame")
16 return(mf)
17 else if (method != "qr")
18 warning(gettextf("method = '%s' is not supported. Using 'qr'",
19 method), domain = NA)
20 mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
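A few helpers make this kind of spelunking easier; for example:
```
args(lm)                      # just the arguments and their defaults
# body(lm)                    # the complete function body
getS3method('summary', 'lm')  # the source of a specific method
```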
Documentation
-------------
Many applied users of R are quick to search the web for help when they come to a problem. This is great; you'll find a lot of information out there. However, it will likely take you a while to sort through things and find exactly what you need. Strangely, I see that many R users don't consult the documentation first, e.g. help files, package websites, etc., and yet this is typically the quickest way to answer many of the questions they'll have.
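As a reminder, the documentation is never more than a line of code away; for example (dplyr here is just an arbitrary package):
```
?sample                      # help for a specific function
help('sample')               # the same thing
??sampling                   # search all installed help files for a term
help(package = 'dplyr')      # a package's documentation index
vignette(package = 'dplyr')  # list a package's vignettes, if it has any
```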
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5\. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
We get a brief description of the function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`: what is the thing we're trying to sample from? The numbers 1 through 5\. Next is `size`, which is how many values we want, in this case 10\. So let's try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh\- we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
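With that change the call should work, though your particular values will differ since the sample is random:
```
sample(nums, 10, replace = TRUE)
```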
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which actually is not as simple as one would think! The **`Value`** is important, as it tells us what we can expect the function to return, whether a data frame, list, or whatever. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
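You can even run the Examples section of a help file directly:
```
example(sample)
```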
Not all functions have all this information, but most do, and if they adhere to standards they will[11](#fn11). In any case, all functions share this same documentation format, which puts R above and beyond most programming languages in this regard. Once you look at a couple of help files, you'll always be able to quickly find the information you need from any other.
Objects Exercises
-----------------
With one function, find out the class, the number of rows, and the number of columns of the following object, as well as what kind of object the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
Programming Basics
==================
Becoming a better programmer is in many ways like learning any language. While a programming language may be more literal than a natural one, there is still much nuance, and many ways are available to express yourself in order to solve a problem. However, it doesn't take much in the way of practice to develop a few skills that will not only last, but go a long way toward saving you time and allowing you to explore your data, models, and visualizations more extensively. So let's get to it!
R Objects
---------
### Object Inspection \& Exploration
Let’s say you’ve imported your data into R. If you are going to be able to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically, an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to look at it this way with View, or clicking on it in your Environment viewer.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse perhaps provides a possibly more readable version, and is just str specifically suited toward data frames. In both cases, we get info about the object and the various things within it.
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0\), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the table that’s printed in the summary. And we can now get that ready for publishing for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12\.30 | 18\.72 | 0\.66 | 0\.52 |
| cyl | \-0\.11 | 1\.05 | \-0\.11 | 0\.92 |
| disp | 0\.01 | 0\.02 | 0\.75 | 0\.46 |
| hp | \-0\.02 | 0\.02 | \-0\.99 | 0\.33 |
| drat | 0\.79 | 1\.64 | 0\.48 | 0\.64 |
| wt | \-3\.72 | 1\.89 | \-1\.96 | 0\.06 |
| qsec | 0\.82 | 0\.73 | 1\.12 | 0\.27 |
| vs | 0\.32 | 2\.10 | 0\.15 | 0\.88 |
| am | 2\.52 | 2\.06 | 1\.23 | 0\.23 |
| gear | 0\.66 | 1\.49 | 0\.44 | 0\.67 |
| carb | \-0\.20 | 0\.83 | \-0\.24 | 0\.81 |
After a while, you’ll know what’s in the objects you use most often, and that will allow you more easily work with the content they contain, allowing you to work with them more efficiently.
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you’ll see even more methods for the summary function. When you call summary on an object, the appropriate type of summary method will be used depending on the class of the object. If there is no specific type, e.g. when we called summary on something that already had summary called on it, it will just use a default version listing the contents. To see all the methods for summary, type the following, and you’ll see all that is currently available for your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
Say you are new to a modeling package, and as such, you might want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can then list all the functions that can be used on that object.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
### S4 classes
Everything we’ve been dealing with at this point are S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, but I’m not going to say much about the S4 system of objects other than they are a separate type of object with their own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to deal with in applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
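As a small illustration with the matrix object from above, slot access and the corresponding method return the same information:
```
car_matrix@Dim    # direct slot access
dim(car_matrix)   # the corresponding method, which is generally preferred
```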
### Others
Indeed there are more types of R objects, but they will probably not be of much concern to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html). I can only say that you’ll just have to cross that bridge should you get to it.
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
lm
```
```
function (formula, data, subset, weights, na.action, method = "qr",
    model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
    contrasts = NULL, offset, ...)
{
    ret.x <- x
    ret.y <- y
    cl <- match.call()
    mf <- match.call(expand.dots = FALSE)
    m <- match(c("formula", "data", "subset", "weights", "na.action",
        "offset"), names(mf), 0L)
    mf <- mf[c(1L, m)]
    mf$drop.unused.levels <- TRUE
    mf[[1L]] <- quote(stats::model.frame)
    mf <- eval(mf, parent.frame())
    if (method == "model.frame")
        return(mf)
    else if (method != "qr")
        warning(gettextf("method = '%s' is not supported. Using 'qr'",
            method), domain = NA)
    mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
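As a quick demonstration of that accessibility (base R only, nothing specific to this text), typing a function’s name prints its source, and getS3method will retrieve a specific S3 method, including the non-visible ones marked with an asterisk in the methods output:
```
sd                            # print a function's source by typing its name
getS3method('summary', 'lm')  # retrieve the summary method for lm objects
```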
Documentation
-------------
Many applied users of R are quick to search the web for help when they come to a problem. This is great, and you’ll find a lot of information out there, but it will likely take you a while to sort through things and find exactly what you need. Strangely, many users of R don’t consult the documentation first, e.g. help files, package websites, etc., and yet this is typically the quickest way to answer many of the questions they’ll have.
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5\. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
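For example, either of the following will bring it up:
```
?sample
help(sample)
```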
We get a brief description of a function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5\. Next is the size, which is how many values we want, in this case 10\. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh\- we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
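Something along these lines should do the trick (your values will differ, since it’s a random draw):
```
sample(nums, 10, replace = TRUE)
```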
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which actually is not as simple as one might think! The **`Value`** section is important, as it tells us what we can expect the function to return, whether a data frame, list, or something else. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all this information, but most do, and if they are adhering to standards they will[11](#fn11). However, all functions have this same documentation form, which puts R above and beyond most programming languages in this regard. Once you look at a couple of help files, you’ll always be able to quickly find the information you need from any other.
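A few other documentation helpers are worth knowing; these are all base R, shown here only as pointers:
```
help(package = 'dplyr')        # browse a package's help index
help.search('random sample')   # search installed documentation by topic (shortcut: ??)
example(sample)                # run the examples from a function's help file
```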
Objects Exercises
-----------------
With one function, find out the class, number of rows, and number of columns of the following object, as well as what kind of object the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/programming.html |
Programming Basics
==================
Becoming a better programmer is in many ways like learning any language. While a programming language may be quite literal, there is much nuance, and many ways are available to express yourself in order to solve some problem. However, it doesn’t take much practice to develop a few skills that will not only last, but will go a long way toward saving you time and allowing you to explore your data, models, and visualizations more extensively. So let’s get to it!
R Objects
---------
### Object Inspection \& Exploration
Let’s say you’ve imported your data into R. If you are going to be able to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame, or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to use View, or to click on the object in your Environment viewer.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse provides a possibly more readable version, and is essentially str specifically suited toward data frames. In both cases, we get info about the object and the various things within it.
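A few other quick inspectors complement these; they are just base R functions, nothing specific to the approach here:
```
class(diamonds)   # what kind of object we're dealing with
dim(diamonds)     # number of rows and columns
names(diamonds)   # column names
```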
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0\), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the table that’s printed in the summary. And we can now get that ready for publication, for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12\.30 | 18\.72 | 0\.66 | 0\.52 |
| cyl | \-0\.11 | 1\.05 | \-0\.11 | 0\.92 |
| disp | 0\.01 | 0\.02 | 0\.75 | 0\.46 |
| hp | \-0\.02 | 0\.02 | \-0\.99 | 0\.33 |
| drat | 0\.79 | 1\.64 | 0\.48 | 0\.64 |
| wt | \-3\.72 | 1\.89 | \-1\.96 | 0\.06 |
| qsec | 0\.82 | 0\.73 | 1\.12 | 0\.27 |
| vs | 0\.32 | 2\.10 | 0\.15 | 0\.88 |
| am | 2\.52 | 2\.06 | 1\.23 | 0\.23 |
| gear | 0\.66 | 1\.49 | 0\.44 | 0\.67 |
| carb | \-0\.20 | 0\.83 | \-0\.24 | 0\.81 |
After a while, you’ll know what’s in the objects you use most often, which will allow you to work with their contents more easily and efficiently.
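In the meantime, a couple of quick ways to remind yourself what is inside an object, shown here as a minimal sketch using the model objects from above:
```
# list the top-level components by name
names(lm_mod)
names(lm_mod_summary)

# then pull out whatever piece you need
lm_mod$df.residual
lm_mod_summary$r.squared
```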
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
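If you’re not using RStudio, apropos can produce a similar listing by searching attached packages for matching names (though it won’t show unexported methods). A small sketch:
```
# visible objects whose names start with 'summary.'
apropos('^summary\\.')
```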
When you load additional packages, you’ll see even more methods for the summary function. When you call summary on an object, the appropriate method is used based on the class of the object. If there is no method for that specific class, e.g. when we called summary on something that already had summary called on it, a default version that simply lists the contents is used. To see all the methods for summary, type the following, and you’ll see everything currently available for your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
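The flip side of dispatch is writing a method yourself: a method is just a function named `generic.class`. Here is a minimal sketch, where the `my_scale` class is made up purely for illustration:
```
# a toy constructor that tags a list with a class attribute
my_scale = function(x) {
  structure(list(raw = x, scaled = scale(x)[, 1]), class = 'my_scale')
}

# a summary method for that class; summary() will dispatch to it automatically
summary.my_scale = function(object, ...) {
  cat('my_scale object with', length(object$raw), 'values\n')
  cat('mean:', mean(object$raw), ' sd:', sd(object$raw), '\n')
  invisible(object)
}

summary(my_scale(rnorm(10)))
```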
Say you are new to a modeling package and want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can list all the functions that can be used on it.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
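If you aren’t sure of the class to begin with, class will tell you, and you can feed the result straight to methods. A quick sketch with the lm object from earlier:
```
class(lm_mod)       # "lm"
methods(class = class(lm_mod))
```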
### S4 classes
Everything we’ve been dealing with to this point has been S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, and I won’t say much about the S4 system other than that S4 objects are a separate type of object with their own methods. For practical use you might not see much difference, but if you come across an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than using the `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to work with for applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
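For instance, with the Matrix object above you can get at the same information either way. A short sketch:
```
# direct slot access with @
car_matrix@Dim
car_matrix@Dimnames[[2]]

# the usual methods work too, and are generally preferable
dim(car_matrix)
colnames(car_matrix)
```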
### Others
Indeed there are more types of R objects, but they will probably not be of much notice to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html) classes. I can only say that you’ll just have to cross that bridge should you get to it.
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
## lm
```
```
1 function (formula, data, subset, weights, na.action, method = "qr",
2 model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
3 contrasts = NULL, offset, ...)
4 {
5 ret.x <- x
6 ret.y <- y
7 cl <- match.call()
8 mf <- match.call(expand.dots = FALSE)
9 m <- match(c("formula", "data", "subset", "weights", "na.action",
10 "offset"), names(mf), 0L)
11 mf <- mf[c(1L, m)]
12 mf$drop.unused.levels <- TRUE
13 mf[[1L]] <- quote(stats::model.frame)
14 mf <- eval(mf, parent.frame())
15 if (method == "model.frame")
16 return(mf)
17 else if (method != "qr")
18 warning(gettextf("method = '%s' is not supported. Using 'qr'",
19 method), domain = NA)
20 mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
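A few other ways to poke at a function’s pieces, shown as a small sketch; getS3method is handy when the code you want lives in a method like summary.lm rather than in the generic:
```
formals(lm)     # the arguments and their defaults
body(lm)        # the code itself
environment(lm) # where it lives

getS3method('summary', 'lm')  # source for a specific S3 method
```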
Documentation
-------------
Many applied users of R are quick to search the web for help when they come to a problem. This is great; you’ll find a lot of information out there. However, it will likely take you a while to sort through things and find exactly what you need. Strangely, many R users don’t consult the documentation, e.g. help files, package websites, etc., first, and yet this is typically the quickest way to answer many of the questions they’ll have.
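As a reminder of what’s available before heading to a search engine, here are a few ways to pull up documentation from the console (a minimal sketch):
```
?sample                      # help file for a function
help(package = 'stats')      # index of everything in a package
??'weighted mean'            # search help files by topic
vignette(package = 'dplyr')  # list a package's vignettes, if any
```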
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5\. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
We get a brief description of a function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5\. Next is the size, which is how many values we want, in this case 10\. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh, we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
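For example, the corrected call might look like the following; your particular values will differ, since the result is random.
```
sample(nums, 10, replace = TRUE)
```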
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which actually is not as simple as one would think! The **`Value`** section is important, as it tells us what we can expect the function to return, whether a data frame, list, or whatever. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all of this information, but most do, and if they adhere to standards they will[11](#fn11). Importantly, help files all take this same form, which puts R above and beyond most programming languages in this regard. Once you’ve looked at a couple of help files, you’ll be able to quickly find the information you need in any other.
Objects Exercises
-----------------
With one function, find out the class, number of rows, and number of columns of the following object, as well as what kind of object the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
### Object Inspection \& Exploration
Let’s say you’ve imported your data into R. If you are going to be able to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically, an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to look at it this way with View, or clicking on it in your Environment viewer.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse perhaps provides a possibly more readable version, and is just str specifically suited toward data frames. In both cases, we get info about the object and the various things within it.
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0\), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the table that’s printed in the summary. And we can now get that ready for publishing for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12\.30 | 18\.72 | 0\.66 | 0\.52 |
| cyl | \-0\.11 | 1\.05 | \-0\.11 | 0\.92 |
| disp | 0\.01 | 0\.02 | 0\.75 | 0\.46 |
| hp | \-0\.02 | 0\.02 | \-0\.99 | 0\.33 |
| drat | 0\.79 | 1\.64 | 0\.48 | 0\.64 |
| wt | \-3\.72 | 1\.89 | \-1\.96 | 0\.06 |
| qsec | 0\.82 | 0\.73 | 1\.12 | 0\.27 |
| vs | 0\.32 | 2\.10 | 0\.15 | 0\.88 |
| am | 2\.52 | 2\.06 | 1\.23 | 0\.23 |
| gear | 0\.66 | 1\.49 | 0\.44 | 0\.67 |
| carb | \-0\.20 | 0\.83 | \-0\.24 | 0\.81 |
After a while, you’ll know what’s in the objects you use most often, and that will allow you more easily work with the content they contain, allowing you to work with them more efficiently.
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you’ll see even more methods for the summary function. When you call summary on an object, the appropriate type of summary method will be used depending on the class of the object. If there is no specific type, e.g. when we called summary on something that already had summary called on it, it will just use a default version listing the contents. To see all the methods for summary, type the following, and you’ll see all that is currently available for your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
Say you are new to a modeling package, and as such, you might want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can then list all the functions that can be used on that object.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
### S4 classes
Everything we’ve been dealing with at this point are S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, but I’m not going to say much about the S4 system of objects other than they are a separate type of object with their own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than using the `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to work with for applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
### Others
Indeed there are more types of R objects, but they will probably not be of much concern to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html) classes. I can only say that you’ll just have to cross that bridge should you get to it.
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
## lm
```
```
1 function (formula, data, subset, weights, na.action, method = "qr",
2 model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
3 contrasts = NULL, offset, ...)
4 {
5 ret.x <- x
6 ret.y <- y
7 cl <- match.call()
8 mf <- match.call(expand.dots = FALSE)
9 m <- match(c("formula", "data", "subset", "weights", "na.action",
10 "offset"), names(mf), 0L)
11 mf <- mf[c(1L, m)]
12 mf$drop.unused.levels <- TRUE
13 mf[[1L]] <- quote(stats::model.frame)
14 mf <- eval(mf, parent.frame())
15 if (method == "model.frame")
16 return(mf)
17 else if (method != "qr")
18 warning(gettextf("method = '%s' is not supported. Using 'qr'",
19 method), domain = NA)
20 mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
Documentation
-------------
Many applied users of R are quick to search the web for help when they run into a problem. That’s great, and you’ll find a lot of information out there, but it will likely take a while to sort through things and find exactly what you need. Strangely, many R users don’t consult the documentation, e.g. help files, the package website, etc., first, and yet this is typically the quickest way to answer many of the questions they’ll have.
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
We get a brief description of a function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5. Next is the size, which is how many values we want, in this case 10. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh, we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which is actually not as simple as one would think! The **`Value`** section is important, as it tells us what the function returns, whether a data frame, a list, or something else. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all this information, but most do, and if they adhere to standards they will[11](#fn11). However, every function’s help file follows this same form, which puts R above and beyond most programming languages in this regard. Once you’ve looked at a couple of help files, you’ll be able to quickly find the information you need in any other.
Objects Exercises
-----------------
With one function, find out the class, the number of rows, and the number of columns of the following object, as well as what kind of objects the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/programming.html |
Programming Basics
==================
Becoming a better programmer is in many ways like learning any language. While a programming language may be more literal than a natural one, there is still much nuance, and many ways are available to express yourself in order to solve a given problem. However, it doesn’t take much practice to develop a few skills that will not only last, but will go a long way toward saving you time and allowing you to explore your data, models, and visualizations more extensively. So let’s get to it!
R Objects
---------
### Object Inspection \& Exploration
Let’s say you’ve imported your data into R. If you are going to do anything with it, you’ll have had to create an R object that represents that data. What is that object? By now you know it’s a data frame, specifically an object of [class](https://en.wikipedia.org/wiki/Class_(computer_programming)) data.frame, or possibly a tibble if you’re working within the tidyverse. If you want to look at it, you might be tempted to use View, or to click on it in your Environment pane.
```
View(diamonds)
```
While this is certainly one way to inspect it, it’s not very useful. There’s far too much information to get much out of it, and information you may need to know is absent.
Consider the following:
```
str(diamonds)
```
```
tibble [53,940 × 10] (S3: tbl_df/tbl/data.frame)
$ carat : num [1:53940] 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 0.23 ...
$ cut : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 3 3 1 3 ...
$ color : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 6 5 2 5 ...
$ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 6 7 3 4 5 ...
$ depth : num [1:53940] 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
$ table : num [1:53940] 55 61 65 58 58 57 57 55 61 61 ...
$ price : int [1:53940] 326 326 327 334 335 336 336 337 337 338 ...
$ x : num [1:53940] 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
$ y : num [1:53940] 3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
$ z : num [1:53940] 2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
```
```
glimpse(diamonds)
```
```
Rows: 53,940
Columns: 10
$ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30, 0.23, 0.22, 0.31, 0.20, 0.32, 0.30, 0.30, 0.30, 0.30, 0.30, 0.23, 0.23, 0.31, 0.31, 0.23, 0.24, 0.30, 0.23, 0.23, 0.23, 0.23, 0.23, 0.2…
$ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very Good, Fair, Very Good, Good, Ideal, Premium, Ideal, Premium, Premium, Ideal, Good, Good, Very Good, Good, Very Good, Very Good, Very Good…
$ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I, E, H, J, J, G, I, J, D, F, F, F, E, E, D, F, E, H, D, I, I, J, D, D, H, F, H, H, E, H, F, G, I, E, D, I, J, I, I, I, I, D, D, D, I, G, I, …
$ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, SI1, SI2, SI2, I1, SI2, SI1, SI1, SI1, SI2, VS2, VS1, SI1, SI1, VVS2, VS1, VS2, VS2, VS1, VS1, VS1, VS1, VS1, VS1, VS1, VS1, SI1, VS2, SI2,…
$ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64.0, 62.8, 60.4, 62.2, 60.2, 60.9, 62.0, 63.4, 63.8, 62.7, 63.3, 63.8, 61.0, 59.4, 58.1, 60.4, 62.5, 62.2, 60.5, 60.9, 60.0, 59.8, 60.7, 59.…
$ table <dbl> 55.0, 61.0, 65.0, 58.0, 58.0, 57.0, 57.0, 55.0, 61.0, 61.0, 55.0, 56.0, 61.0, 54.0, 62.0, 58.0, 54.0, 54.0, 56.0, 59.0, 56.0, 55.0, 57.0, 62.0, 62.0, 58.0, 57.0, 57.0, 61.0, 57.0, 57.0, 57.0, 59.0, 58.…
$ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 342, 344, 345, 345, 348, 351, 351, 351, 351, 352, 353, 353, 353, 354, 355, 357, 357, 357, 402, 402, 402, 402, 402, 402, 402, 402, 403, 403, 4…
$ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.25, 3.93, 3.88, 4.35, 3.79, 4.38, 4.31, 4.23, 4.23, 4.21, 4.26, 3.85, 3.94, 4.39, 4.44, 3.97, 3.97, 4.28, 3.96, 3.96, 4.00, 4.04, 3.97, 4.0…
$ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.28, 3.90, 3.84, 4.37, 3.75, 4.42, 4.34, 4.29, 4.26, 4.27, 4.30, 3.92, 3.96, 4.43, 4.47, 4.01, 3.94, 4.30, 3.97, 3.99, 4.03, 4.06, 4.01, 4.0…
$ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.73, 2.46, 2.33, 2.71, 2.27, 2.68, 2.68, 2.70, 2.71, 2.66, 2.71, 2.48, 2.41, 2.62, 2.59, 2.41, 2.47, 2.67, 2.40, 2.42, 2.41, 2.42, 2.42, 2.4…
```
The str function looks at the *structure* of the object, while glimpse provides a possibly more readable version; it is essentially str specifically suited to data frames. In both cases, we get info about the object and the various things within it.
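A few other quick checks pair nicely with these; nothing fancy, just base R:
```
class(diamonds)    # what kind of object is it?
dim(diamonds)      # how many rows and columns?
head(diamonds, 3)  # peek at the first few rows
```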
While you might be doing this with data frames, you should be doing it with any of the objects you’re interested in. Consider a regression model object.
```
lm_mod = lm(mpg ~ ., data=mtcars)
str(lm_mod, 0)
```
```
List of 12
- attr(*, "class")= chr "lm"
```
```
str(lm_mod, 1)
```
```
List of 12
$ coefficients : Named num [1:11] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ effects : Named num [1:32] -113.65 -28.6 6.13 -3.06 -4.06 ...
..- attr(*, "names")= chr [1:32] "(Intercept)" "cyl" "disp" "hp" ...
$ rank : int 11
$ fitted.values: Named num [1:32] 22.6 22.1 26.3 21.2 17.7 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ assign : int [1:11] 0 1 2 3 4 5 6 7 8 9 ...
$ qr :List of 5
..- attr(*, "class")= chr "qr"
$ df.residual : int 21
$ xlevels : Named list()
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ model :'data.frame': 32 obs. of 11 variables:
..- attr(*, "terms")=Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. .. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. .. ..- attr(*, "dimnames")=List of 2
.. .. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. .. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. .. ..- attr(*, "intercept")= int 1
.. .. ..- attr(*, "response")= int 1
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. .. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. .. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "lm"
```
Here we look at the object at the lowest level of detail (0), which basically just tells us that it’s a list of stuff. But if we go into more depth, we can see that there is quite a bit going on in here! Coefficients, the data frame used in the model (i.e. only the variables used and no `NA`), and much more are available to us, and we can pluck out any piece of it.
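If you just want the component names before plucking anything out, names will list them for you:
```
names(lm_mod)  # e.g. "coefficients", "residuals", "model", ...
```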
```
lm_mod$coefficients
```
```
(Intercept) cyl disp hp drat wt qsec vs am gear carb
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393 0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
```
```
lm_mod$model %>%
head()
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
```
Let’s do a summary of it, something you’ve probably done many times.
```
summary(lm_mod)
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
But you can assign that to an object and inspect it too!
```
lm_mod_summary = summary(lm_mod)
str(lm_mod_summary)
```
```
List of 11
$ call : language lm(formula = mpg ~ ., data = mtcars)
$ terms :Classes 'terms', 'formula' language mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb
.. ..- attr(*, "variables")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "factors")= int [1:11, 1:10] 0 1 0 0 0 0 0 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. .. .. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
.. .. .. ..$ : chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "term.labels")= chr [1:10] "cyl" "disp" "hp" "drat" ...
.. ..- attr(*, "order")= int [1:10] 1 1 1 1 1 1 1 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the whole table that’s printed in the summary. And we can now get that ready for publication, for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12.30 | 18.72 | 0.66 | 0.52 |
| cyl | -0.11 | 1.05 | -0.11 | 0.92 |
| disp | 0.01 | 0.02 | 0.75 | 0.46 |
| hp | -0.02 | 0.02 | -0.99 | 0.33 |
| drat | 0.79 | 1.64 | 0.48 | 0.64 |
| wt | -3.72 | 1.89 | -1.96 | 0.06 |
| qsec | 0.82 | 0.73 | 1.12 | 0.27 |
| vs | 0.32 | 2.10 | 0.15 | 0.88 |
| am | 2.52 | 2.06 | 1.23 | 0.23 |
| gear | 0.66 | 1.49 | 0.44 | 0.67 |
| carb | -0.20 | 0.83 | -0.24 | 0.81 |
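Since that coefficients element is just a matrix, you can also index it directly, for example to grab a single row or value (a quick sketch):
```
lm_mod_summary$coefficients['wt', ]            # one row of the table
lm_mod_summary$coefficients['wt', 'Pr(>|t|)']  # a single value
```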
After a while, you’ll know what’s in the objects you use most often, and that will allow you to work with their contents more easily and efficiently.
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you’ll see even more methods for the summary function. When you call summary on an object, the appropriate summary method is used depending on the class of the object. If there is no method specific to that class, e.g. when we called summary on something that had already had summary called on it, a default version that just lists the contents is used. To see all the methods for summary, type the following, and you’ll see all that is currently available for your R session.
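To make the dispatch idea concrete, here is a minimal sketch, where the class name `my_class` is made up purely for illustration:
```
my_object = list(a = 1:3, b = letters[1:3])
class(my_object) = 'my_class'               # assign an arbitrary class

# a summary method for that class
summary.my_class = function(object, ...) {
  cat('A my_class object with', length(object), 'elements\n')
}

summary(my_object)  # summary() now dispatches to summary.my_class
```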
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
Say you are new to a modeling package and want to see everything you can do with the resulting object. Once you’ve discerned the class of the model object, you can list all the functions that have methods for that class.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to get familiar with a package and the objects it produces more quickly, and reveals functionality you might not have even known to look for in the first place!
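The same workflow applies to objects we already have lying around, e.g. the regression model from before; what you see will depend on which packages you have loaded:
```
class(lm_mod)
methods(class = 'lm')
```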
### S4 classes
Everything we’ve been dealing with to this point has been S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, but I’m not going to say much about the S4 system of objects other than that they are a separate type of object with their own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than using the `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying for applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
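For instance, with the Matrix object above you could poke at the slots directly, though the method-based route is usually the way to go (a small sketch):
```
slotNames(car_matrix)  # which slots exist
car_matrix@Dim         # direct slot access
dim(car_matrix)        # the method that does the same thing for you
```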
### Others
Indeed there are more types of R objects, but they will probably not be of much concern to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html) classes. I can only say that you’ll just have to cross that bridge should you get to it.
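If you’re curious what R6 looks like, here is a minimal sketch; the `Counter` class and its field are made up purely for illustration:
```
library(R6)

Counter = R6Class(
  'Counter',
  public = list(
    count = 0,
    add   = function(x = 1) {
      self$count = self$count + x
      invisible(self)
    }
  )
)

counter = Counter$new()
counter$add()$add(5)
counter$count  # 6; R6 objects are modified in place
```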
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
## lm
```
```
1 function (formula, data, subset, weights, na.action, method = "qr",
2 model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
3 contrasts = NULL, offset, ...)
4 {
5 ret.x <- x
6 ret.y <- y
7 cl <- match.call()
8 mf <- match.call(expand.dots = FALSE)
9 m <- match(c("formula", "data", "subset", "weights", "na.action",
10 "offset"), names(mf), 0L)
11 mf <- mf[c(1L, m)]
12 mf$drop.unused.levels <- TRUE
13 mf[[1L]] <- quote(stats::model.frame)
14 mf <- eval(mf, parent.frame())
15 if (method == "model.frame")
16 return(mf)
17 else if (method != "qr")
18 warning(gettextf("method = '%s' is not supported. Using 'qr'",
19 method), domain = NA)
20 mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
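A couple of quick ways to peek at functions, as a sketch:
```
args(lm)  # just the signature, without the body
sd        # printing a simple function shows its full source
body(sd)  # or grab the body by itself
```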
Documentation
-------------
Many applied users of R are quick to search the web for help when they run into a problem. That’s great, and you’ll find a lot of information out there, but it will likely take a while to sort through things and find exactly what you need. Strangely, many R users don’t consult the documentation, e.g. help files, the package website, etc., first, and yet this is typically the quickest way to answer many of the questions they’ll have.
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
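If the question mark syntax is new to you, these are the standard ways into the documentation; the dplyr call is just one example of listing a package’s vignettes:
```
?sample                      # help file for a specific function
help(sample)                 # the same thing
??sampling                   # search the help system when you don't know the name
vignette(package = 'dplyr')  # list a package's long-form guides
```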
We get a brief description of a function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5. Next is the size, which is how many values we want, in this case 10. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh, we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
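With that change the call works; your particular values will differ, since it’s a random sample:
```
sample(nums, 10, replace = TRUE)
```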
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which is actually not as simple as one would think! The **`Value`** section is important, as it tells us what the function returns, whether a data frame, a list, or something else. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all this information, but most do, and if they adhere to standards they will[11](#fn11). However, every function’s help file follows this same form, which puts R above and beyond most programming languages in this regard. Once you’ve looked at a couple of help files, you’ll be able to quickly find the information you need in any other.
Objects Exercises
-----------------
With one function, find out the class, the number of rows, and the number of columns of the following object, as well as what kind of objects the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
.. ..- attr(*, "dataClasses")= Named chr [1:11] "numeric" "numeric" "numeric" "numeric" ...
.. .. ..- attr(*, "names")= chr [1:11] "mpg" "cyl" "disp" "hp" ...
$ residuals : Named num [1:32] -1.6 -1.112 -3.451 0.163 1.007 ...
..- attr(*, "names")= chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
$ coefficients : num [1:11, 1:4] 12.3034 -0.1114 0.0133 -0.0215 0.7871 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
$ aliased : Named logi [1:11] FALSE FALSE FALSE FALSE FALSE FALSE ...
..- attr(*, "names")= chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
$ sigma : num 2.65
$ df : int [1:3] 11 21 11
$ r.squared : num 0.869
$ adj.r.squared: num 0.807
$ fstatistic : Named num [1:3] 13.9 10 21
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:11, 1:11] 49.883532 -1.874242 -0.000841 -0.003789 -1.842635 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
.. ..$ : chr [1:11] "(Intercept)" "cyl" "disp" "hp" ...
- attr(*, "class")= chr "summary.lm"
```
If we pull the coefficients from this object, we are not just getting the values, but the table that's printed in the summary. And we can now get that ready for publication, for example[9](#fn9).
```
lm_mod_summary$coefficients %>%
kableExtra::kable(digits = 2)
```
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | 12\.30 | 18\.72 | 0\.66 | 0\.52 |
| cyl | \-0\.11 | 1\.05 | \-0\.11 | 0\.92 |
| disp | 0\.01 | 0\.02 | 0\.75 | 0\.46 |
| hp | \-0\.02 | 0\.02 | \-0\.99 | 0\.33 |
| drat | 0\.79 | 1\.64 | 0\.48 | 0\.64 |
| wt | \-3\.72 | 1\.89 | \-1\.96 | 0\.06 |
| qsec | 0\.82 | 0\.73 | 1\.12 | 0\.27 |
| vs | 0\.32 | 2\.10 | 0\.15 | 0\.88 |
| am | 2\.52 | 2\.06 | 1\.23 | 0\.23 |
| gear | 0\.66 | 1\.49 | 0\.44 | 0\.67 |
| carb | \-0\.20 | 0\.83 | \-0\.24 | 0\.81 |
After a while, you'll know what's in the objects you use most often, and that will allow you to work with their contents more easily and efficiently.
### Methods
Consider the following:
```
summary(diamonds) # data frame
```
```
carat cut color clarity depth table price x y z
Min. :0.2000 Fair : 1610 D: 6775 SI1 :13065 Min. :43.00 Min. :43.00 Min. : 326 Min. : 0.000 Min. : 0.000 Min. : 0.000
1st Qu.:0.4000 Good : 4906 E: 9797 VS2 :12258 1st Qu.:61.00 1st Qu.:56.00 1st Qu.: 950 1st Qu.: 4.710 1st Qu.: 4.720 1st Qu.: 2.910
Median :0.7000 Very Good:12082 F: 9542 SI2 : 9194 Median :61.80 Median :57.00 Median : 2401 Median : 5.700 Median : 5.710 Median : 3.530
Mean :0.7979 Premium :13791 G:11292 VS1 : 8171 Mean :61.75 Mean :57.46 Mean : 3933 Mean : 5.731 Mean : 5.735 Mean : 3.539
3rd Qu.:1.0400 Ideal :21551 H: 8304 VVS2 : 5066 3rd Qu.:62.50 3rd Qu.:59.00 3rd Qu.: 5324 3rd Qu.: 6.540 3rd Qu.: 6.540 3rd Qu.: 4.040
Max. :5.0100 I: 5422 VVS1 : 3655 Max. :79.00 Max. :95.00 Max. :18823 Max. :10.740 Max. :58.900 Max. :31.800
J: 2808 (Other): 2531
```
```
summary(diamonds$clarity) # vector
```
```
I1 SI2 SI1 VS2 VS1 VVS2 VVS1 IF
741 9194 13065 12258 8171 5066 3655 1790
```
```
summary(lm_mod) # lm object
```
```
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.30337 18.71788 0.657 0.5181
cyl -0.11144 1.04502 -0.107 0.9161
disp 0.01334 0.01786 0.747 0.4635
hp -0.02148 0.02177 -0.987 0.3350
drat 0.78711 1.63537 0.481 0.6353
wt -3.71530 1.89441 -1.961 0.0633 .
qsec 0.82104 0.73084 1.123 0.2739
vs 0.31776 2.10451 0.151 0.8814
am 2.52023 2.05665 1.225 0.2340
gear 0.65541 1.49326 0.439 0.6652
carb -0.19942 0.82875 -0.241 0.8122
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.65 on 21 degrees of freedom
Multiple R-squared: 0.869, Adjusted R-squared: 0.8066
F-statistic: 13.93 on 10 and 21 DF, p-value: 3.793e-07
```
```
summary(lm_mod_summary) # lm summary object
```
```
Length Class Mode
call 3 -none- call
terms 3 terms call
residuals 32 -none- numeric
coefficients 44 -none- numeric
aliased 11 -none- logical
sigma 1 -none- numeric
df 3 -none- numeric
r.squared 1 -none- numeric
adj.r.squared 1 -none- numeric
fstatistic 3 -none- numeric
cov.unscaled 121 -none- numeric
```
How is it that one function works on all these different types of objects? That’s not all. In RStudio, type `summary.` and hit the tab key.
When you load additional packages, you'll see even more methods for the summary function. When you call summary on an object, the appropriate method is dispatched based on the class of the object. If there is no method for that specific class, e.g. when we called summary on something that had already had summary called on it, a default version that just lists the contents is used. To see all the methods for summary, type the following, and you'll see all that is currently available for your R session.
```
methods('summary')
```
```
[1] summary,ANY-method summary,DBIObject-method summary,diagonalMatrix-method summary,sparseMatrix-method summary.aov
[6] summary.aovlist* summary.aspell* summary.check_packages_in_dir* summary.connection summary.corAR1*
[11] summary.corARMA* summary.corCAR1* summary.corCompSymm* summary.corExp* summary.corGaus*
[16] summary.corIdent* summary.corLin* summary.corNatural* summary.corRatio* summary.corSpher*
[21] summary.corStruct* summary.corSymm* summary.data.frame summary.Date summary.default
[26] summary.Duration* summary.ecdf* summary.factor summary.gam summary.ggplot*
[31] summary.glm summary.gls* summary.haven_labelled* summary.hcl_palettes* summary.infl*
[36] summary.Interval* summary.lm summary.lme* summary.lmList* summary.loess*
[41] summary.manova summary.matrix summary.microbenchmark* summary.mlm* summary.modelStruct*
[46] summary.nls* summary.nlsList* summary.packageStatus* summary.pandas.core.frame.DataFrame* summary.pandas.core.series.Series*
[51] summary.pdBlocked* summary.pdCompSymm* summary.pdDiag* summary.pdIdent* summary.pdIdnot*
[56] summary.pdLogChol* summary.pdMat* summary.pdNatural* summary.pdSymm* summary.pdTens*
[61] summary.Period* summary.POSIXct summary.POSIXlt summary.ppr* summary.prcomp*
[66] summary.princomp* summary.proc_time summary.python.builtin.object* summary.reStruct* summary.rlang_error*
[71] summary.rlang_trace* summary.shingle* summary.srcfile summary.srcref summary.stepfun
[76] summary.stl* summary.table summary.trellis* summary.tukeysmooth* summary.varComb*
[81] summary.varConstPower* summary.varExp* summary.varFixed* summary.varFunc* summary.varIdent*
[86] summary.varPower* summary.vctrs_sclr* summary.vctrs_vctr* summary.warnings
see '?methods' for accessing help and source code
```
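To see the dispatch mechanism in action yourself, here is a minimal sketch (the class and method names are made up purely for illustration): give an object a class, define a function named `summary.<class>`, and summary will find it automatically.
```
my_obj = list(a = 1:3, b = letters[1:3])
class(my_obj) = 'myclass'  # a made-up class

# a summary method for that class; the name must be summary.<class>
summary.myclass = function(object, ...) {
  cat('A myclass object with', length(object), 'elements\n')
  invisible(object)
}

summary(my_obj)  # dispatches to summary.myclass
```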
Say you are new to a modeling package, and as such, you might want to see what all you can do with the resulting object. Once you’ve discerned the class of the model object, you can then list all the functions that can be used on that object.
```
library(brms)
methods(class = 'brmsfit')
```
```
[1] add_criterion add_ic as.array as.data.frame as.matrix as.mcmc autocor bayes_factor bayes_R2
[10] bridge_sampler coef conditional_effects conditional_smooths control_params expose_functions family fitted fixef
[19] formula getCall hypothesis kfold launch_shinystan log_lik log_posterior logLik loo_compare
[28] loo_linpred loo_model_weights loo_moment_match loo_predict loo_predictive_interval loo_R2 loo_subsample loo LOO
[37] marginal_effects marginal_smooths mcmc_plot model_weights model.frame neff_ratio ngrps nobs nsamples
[46] nuts_params pairs parnames plot_coefficients plot post_prob posterior_average posterior_epred posterior_interval
[55] posterior_linpred posterior_predict posterior_samples posterior_summary pp_average pp_check pp_mixture predict predictive_error
[64] predictive_interval prepare_predictions print prior_samples prior_summary ranef reloo residuals rhat
[73] stancode standata stanplot summary update VarCorr vcov waic WAIC
see '?methods' for accessing help and source code
```
This allows you to more quickly get familiar with a package and the objects it produces, and provides utility you might not have even known to look for in the first place!
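The same approach works for objects you already know well. A quick sketch using the lm_mod object from before: find its class, then list what you can do with it.
```
class(lm_mod)
methods(class = 'lm')  # e.g. anova, coef, confint, plot, predict, residuals, summary, ...
```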
### S4 classes
Everything we’ve been dealing with at this point are S3 objects, classes, and methods. R is a dialect of the [S language](https://www.r-project.org/conferences/useR-2006/Slides/Chambers.pdf), and the S3 name reflects the version of S at the time of R’s creation. S4 was the next iteration of S, but I’m not going to say much about the S4 system of objects other than they are a separate type of object with their own methods. For practical use you might not see much difference, but if you see an S4 object, it will have slots accessible via `@`.
```
car_matrix = mtcars %>%
as.matrix() %>% # convert from df to matrix
Matrix::Matrix() # convert to Matrix class (S4)
typeof(car_matrix)
```
```
[1] "S4"
```
```
str(car_matrix)
```
```
Formal class 'dgeMatrix' [package "Matrix"] with 4 slots
..@ x : num [1:352] 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...
..@ Dim : int [1:2] 32 11
..@ Dimnames:List of 2
.. ..$ : chr [1:32] "Mazda RX4" "Mazda RX4 Wag" "Datsun 710" "Hornet 4 Drive" ...
.. ..$ : chr [1:11] "mpg" "cyl" "disp" "hp" ...
..@ factors : list()
```
Usually you will access the contents via methods rather than using the `@`, and that assumes you know what those methods are. Mostly, I just find S4 objects slightly more annoying to work with for applied work, but you should be at least somewhat familiar with them so that you won’t be thrown off course when they appear.
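For instance, with the car_matrix object above you could list and access the slots directly, though the method-based route is usually preferable; a quick sketch:
```
slotNames(car_matrix)  # "x" "Dim" "Dimnames" "factors"
car_matrix@Dim         # direct slot access
dim(car_matrix)        # the method-based equivalent
```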
### Others
Indeed there are more types of R objects, but they will probably not be of much notice to the applied user. As an example, packages like mlr3 and text2vec use [R6](https://cran.r-project.org/web/packages/R6/vignettes/Introduction.html). I can only say that you'll just have to cross that bridge should you get to it.
### Inspecting Functions
You might not think of them as such, but in R, everything’s an object, including functions. You can inspect them like anything else.
```
str(lm)
```
```
function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...)
```
```
## lm
```
```
1 function (formula, data, subset, weights, na.action, method = "qr",
2 model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE,
3 contrasts = NULL, offset, ...)
4 {
5 ret.x <- x
6 ret.y <- y
7 cl <- match.call()
8 mf <- match.call(expand.dots = FALSE)
9 m <- match(c("formula", "data", "subset", "weights", "na.action",
10 "offset"), names(mf), 0L)
11 mf <- mf[c(1L, m)]
12 mf$drop.unused.levels <- TRUE
13 mf[[1L]] <- quote(stats::model.frame)
14 mf <- eval(mf, parent.frame())
15 if (method == "model.frame")
16 return(mf)
17 else if (method != "qr")
18 warning(gettextf("method = '%s' is not supported. Using 'qr'",
19 method), domain = NA)
20 mt <- attr(mf, "terms")
```
One of the primary reasons for R’s popularity is the accessibility of the underlying code. People can very easily access the code for some function, modify it, extend it, etc. From an applied perspective, if you want to get better at writing code, or modify existing code, all you have to do is dive in! We’ll talk more about writing functions [later](functions.html#writing-functions).
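Beyond str, a few base functions are handy for poking at a function object; a quick sketch:
```
args(lm)         # just the argument signature
formals(lm)      # the arguments and their defaults, as a list
body(lm)         # the code inside the function
environment(lm)  # where the function was defined
```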
Documentation
-------------
Many applied users of R are quick to search the web for help when they come to a problem. This is great, and you'll find a lot of information out there. However, it will likely take you a bit to sort through things and find exactly what you need. Strangely, many R users don't consult the documentation, e.g. help files, package website, etc., first, and yet this is typically the quickest way to answer many of the questions they'll have.
Let’s start with an example. We’ll use the sample function to get a random sample of 10 values from the range of numbers 1 through 5\. So, go ahead and do so!
```
sample(?)
```
Don’t know what to put? Consult the help file!
We get a brief description of a function at the top, then we see how to actually use it, i.e. the form the syntax should take. We find out there is even an additional function, sample.int, that we could use. Next we see what arguments are possible. First we need an `x`, so what is the thing we’re trying to sample from? The numbers 1 through 5\. Next is the size, which is how many values we want, in this case 10\. So let’s try it.
```
nums = 1:5
sample(nums, 10)
```
```
Error in sample.int(length(x), size, replace, prob): cannot take a sample larger than the population when 'replace = FALSE'
```
Uh oh\- we have a problem with the `replace` argument! We can see in the help file that, by default, it is `FALSE`[10](#fn10), but if we want to sample 10 times from only 5 numbers, we’ll need to change it to `TRUE`.
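Putting that together, something like the following should do it (your values will differ, since it's a random draw).
```
sample(nums, 10, replace = TRUE)
```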
Now we are on our way!
The help file gives detailed information about the sampling that is possible, which actually is not as simple as one would think! The **`Value`** is important, as it tells us what we can expect the function to return, whether a data frame, list, or whatever. We even get references, other functions that might be of interest (**`See Also`**), and examples. There is a lot to digest for this function!
Not all functions have all this information, but most do, and if they are adhering to standards they will[11](#fn11). However, all functions have this same documentation form, which puts R above and beyond most programming languages in this regard. Once you look at a couple of help files, you’ll always be able to quickly find the information you need from any other.
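As a reminder of how to get to these help files in the first place, any of the following will work; the last is a search across the help system for when you don't know the exact function name.
```
?sample          # help file for a function
help('sample')   # the same thing
??sampling       # help.search shortcut
```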
Objects Exercises
-----------------
With one function, find out the class, number of rows, and number of columns of the following object, as well as what kind of object the last three columns are. Inspect the help file also.
```
library(dplyr)
?starwars
```
Iterative Programming
=====================
Almost everything you do when dealing with data will need to be done again, and again, and again. If you are copy\-pasting your way through the same repetitive tasks, you're not only doing things inefficiently, you're almost certainly setting yourself up for trouble if anything changes about the data or underlying process.
In order to avoid this, you need to be familiar with basic programming, and a starting point is to use an iterative approach to repetitive problems. Let's say we want to get the means of some columns in our data set. Do you do something like this?
```
means1 = mean(df$x)
means2 = mean(df$y)
means3 = mean(df$z)
means4 = mean(df$q)
```
Now consider what you have to change if you change a variable name, decide to do a median, or the data object name changes. Any minor change with the data will cause you to have to redo that code, and possibly every line of it.
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
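For instance, checking the first column directly should reproduce the first value printed by the loop.
```
mean(weather$temp, na.rm = TRUE)  # 55.26039
```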
Now if the data name changes, the columns we want change, or we want to calculate something else, we usually only have to change one thing, rather than many lines of copy\-pasted code. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that's the only line in the code that would need to be changed. The `i` is now a placeholder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that's the length of the columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
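If you're curious, a rough timing sketch along these lines shows the difference (the vector and functions here are made up for illustration, and exact timings will vary by machine).
```
x = rnorm(1e5)

grow = function() {
  out = numeric()                 # grows at every iteration
  for (i in seq_along(x)) out[i] = x[i]^2
  out
}

prealloc = function() {
  out = numeric(length(x))        # memory allocated up front
  for (i in seq_along(x)) out[i] = x[i]^2
  out
}

system.time(grow())
system.time(prealloc())
```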
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states: while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of the ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference between using the while approach and the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they'd go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin (rows or columns) you want to loop over. The first argument is the data you're considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows or columns is the third argument. The following example is much cleaner compared to the loop, and now you'd have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
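For instance, a couple of quick sketches with other members of the family, using the mtcars data:
```
tapply(mtcars$mpg, mtcars$cyl, mean)        # grouped means: mean mpg by cylinder count
replicate(3, mean(sample(mtcars$mpg, 10)))  # repeat an operation N times
```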
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you'll regularly come across code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops due to parallelization. With base R's parallel package, there are parallel versions of the apply family, e.g. parApply, parLapply, etc. As every modern computer has at least four cores to play with, you'll always potentially have nearly a 4x speedup by using the parallel apply functions.
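As a minimal sketch of what that looks like with the weather data from earlier (assuming at least two free cores; note the data must be exported to the worker processes):
```
library(parallel)

cl = makeCluster(2)
clusterExport(cl, 'weather')   # make the data visible to the workers

parSapply(cl, c('temp', 'humid', 'wind_speed', 'precip'),
          function(col) mean(weather[[col]], na.rm = TRUE))

stopCluster(cl)
```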
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference between a for loop and a while loop. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However, they allow you to stay within the tidy realm, which has its own benefits, and give you more control over the nature of the output[14](#fn14), which is especially important in reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to them.
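For instance, the typed variants make the output type explicit; a quick sketch with the list x from above:
```
map_int(x, length)                       # an integer vector, one value per list element
map_chr(x, ~ paste(.x, collapse = '-'))  # a character vector
```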
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list for which each element is a model object. We then apply a function, e.g. to get the AIC value for each, or adjusted R square (this requires a custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
#### List columns
As data.frames are lists, anything can be put into a column just as you would a list element. We’ll use pmap here, as it can take more than one argument, and we’re feeding all columns of the data.frame. You don’t need to worry about the details here, we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you'd normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you might want to retain the information you might otherwise store in a list with the data that was used to create it. As an example, you could potentially attach models as a list column to a data frame that contains meta\-information about each model. Once you have a list column, you can use that column as you would any list for iterative programming.
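A minimal sketch of that idea, reusing the model_list from earlier (the column names here are just for illustration):
```
model_df = tibble(
  model_name = names(model_list),
  n_coef     = map_int(model_list, ~ length(coef(.x))),
  fit        = model_list    # a list column holding the model objects themselves
)

model_df %>%
  mutate(AIC = map_dbl(fit, AIC)) %>%
  arrange(AIC)
```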
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
Now if the data name changes, the columns we want change, or we want to calculate something else, we usually end up only changing one thing, rather than at least changing one at a minimum, and probably many more things. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a place holder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that’s the length of the columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops do to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g.parApply, parLapply etc. As every modern computer has at least four cores to play with, you’ll always potentially have nearly a 4x speedup by using the parallel apply functions.
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference for a for loop vs. using while. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However they allow you stay within the tidy realm, which has its own benefits, and have more control over the nature of the output[14](#fn14), which is especially important in reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to it.
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/iterative.html |
Iterative Programming
=====================
Almost everything you do when dealing with data will need to be done again, and again, and again. If you are copy\-pasting your way through the same task over and over, you’re not only working inefficiently, you’re almost certainly setting yourself up for trouble if anything changes about the data or the underlying process.
In order to avoid this, you need to be familiar with basic programming, and a starting point is to use an iterative approach to repetitive problems. Consider the following: say we want to get the means of some columns in our data set. Do you do something like this?
```
means1 = mean(df$x)
means2 = mean(df$y)
means3 = mean(df$z)
means4 = mean(df$q)
```
Now consider what you have to change if you change a variable name, decide to do a median, or the data object name changes. Any minor change with the data will cause you to have to redo that code, and possibly every line of it.
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
Now if the data name changes, the columns we want change, or we want to calculate something else, we typically only have to change one thing, rather than editing at least one line per column, and possibly many more. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a placeholder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that’s the same length as columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the first approach is slightly faster, because memory is allocated for all elements of nyc\_means up front, rather than the object being grown at every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
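If you want to see this for yourself, a minimal sketch of a comparison might look like the following; grow\_it and prealloc\_it are just illustrative names, and the size of the gap will depend on your machine and R version.
```
n = 1e5

grow_it = function() {
  out = numeric()               # starts empty and grows every iteration
  for (i in 1:n) out[i] = i^2
  out
}

prealloc_it = function() {
  out = numeric(n)              # memory set aside up front
  for (i in 1:n) out[i] = i^2
  out
}

system.time(grow_it())
system.time(prealloc_it())
```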
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It says: while `i` is less than or equal to the length (number) of columns, compute the ith element of nyc\_means as the mean of the ith column of weather, then increase the value of `i`. So, we start with `i = 1`, compute that first mean, `i` now equals 2, do the process again, and so on. The process stops as soon as the value of `i` is greater than the length of columns.
*There is zero practical difference between using the while approach and the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
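As a sketch of the sort of situation where while reads naturally, consider iterating until some tolerance is met, the way an estimation routine might; this is purely illustrative and not taken from any particular modeling function.
```
x         = 10
tolerance = 1e-6
change    = Inf

while (change > tolerance) {  # keep going until the update is tiny
  x_new  = x / 2
  change = abs(x_new - x)
  x      = x_new
}

x
```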
### Loops summary
Understanding loops is fundamental to spending less time processing data and more time exploring it. Your code will be more succinct and better able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next \- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
  x  = mydf[, i]
  mu = mean(x)  # compute mean and sd once per column, otherwise each
  s  = sd(x)    # x[j] would be standardized against a partly-changed x
  for (j in 1:length(x)) {
    x[j] = (x[j] - mu) / s
  }
  mydf[, i] = x # store the standardized column back in the data
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin \- row or column \- that you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to each row or column is the third argument. The following example is much cleaner than the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
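For instance, quick sketches of tapply, mapply, and replicate from the list above might look like the following, using built\-in data so they run as is.
```
# tapply: a grouped operation, e.g. mean mpg for each cylinder value in mtcars
tapply(mtcars$mpg, mtcars$cyl, mean)

# mapply: iterate over multiple inputs at once
mapply(function(a, b) a + b, 1:3, 4:6)

# replicate: repeat an operation N times, e.g. means of repeated random draws
replicate(3, mean(rnorm(10)))
```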
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
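vapply, mentioned in the earlier list, works like sapply but makes you declare what each result should look like, which is safer when writing functions. A minimal sketch with the same x as above:
```
# errors if any element doesn't return a single character value
vapply(x, str_remove, FUN.VALUE = character(1), pattern = 'ab')
```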
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come across code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops due to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g. parApply, parLapply, etc. As most modern computers have at least four cores to play with, you’ll often have the potential for nearly a 4x speedup by using the parallel apply functions.
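As a rough sketch with the parallel package (the worker count is up to you, and for a task this small the startup overhead may outweigh any gain):
```
library(parallel)

cl = makeCluster(2)         # start two worker processes

parSapply(cl, mtcars, mean) # parallel version of sapply over the columns

stopCluster(cl)             # shut the workers down when finished
```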
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference for a for loop vs. using while. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However, they allow you to stay within the tidy realm, which has its own benefits, and give you more control over the nature of the output[14](#fn14), which is especially important for reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to them.
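For example, the type\-specific variants pin down what comes back, and will throw an error rather than silently return something unexpected. A small sketch:
```
x = list(1:3, 4:6, 7:9)

map_dbl(x, sum)                          # always a double vector
map_int(x, length)                       # always an integer vector
map_chr(x, ~ paste(.x, collapse = '-'))  # always a character vector
```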
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list for which each element is a model object. We then apply a function, e.g. to get the AIC value for each, or adjusted R square (this requires a custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
#### List columns
As data.frames are lists, anything can be put into a column just as you would a list element. We’ll use pmap here, as it can take more than one argument, and we’re feeding all columns of the data.frame. You don’t need to worry about the details here, we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you’d normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you might want to keep information you would otherwise store in a separate list together with the data that was used to create it. As an example, you could attach models as a list column to a data frame that contains meta\-information about each model (see the sketch below). Once you have a list column, you can use that column as you would any list for iterative programming.
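For instance, something along the following lines would attach the models from before to a small summary table; model\_df is just an illustrative name, and this builds on the model\_list object created earlier.
```
model_df = tibble(
  model_name = names(model_list),
  model      = model_list          # a list column holding the fitted models
) %>%
  mutate(AIC = map_dbl(model, AIC))

model_df
```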
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
Now if the data name changes, the columns we want change, or we want to calculate something else, we usually end up only changing one thing, rather than at least changing one at a minimum, and probably many more things. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a place holder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that’s the length of the columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops do to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g.parApply, parLapply etc. As every modern computer has at least four cores to play with, you’ll always potentially have nearly a 4x speedup by using the parallel apply functions.
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference for a for loop vs. using while. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However they allow you stay within the tidy realm, which has its own benefits, and have more control over the nature of the output[14](#fn14), which is especially important in reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to it.
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops do to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g.parApply, parLapply etc. As every modern computer has at least four cores to play with, you’ll always potentially have nearly a 4x speedup by using the parallel apply functions.
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference between a for loop and a while loop. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However, they allow you to stay within the tidy realm, which has its own benefits, and give you more control over the nature of the output[14](#fn14), which is especially important for reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to them.
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list for which each element is a model object. We then apply a function, e.g. to get the AIC value for each, or adjusted R square (this requires a custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
#### List columns
As data.frames are lists, anything can be put into a column just as you would a list element. We’ll use pmap here, as it can take more than one argument, and we’re feeding all columns of the data.frame. You don’t need to worry about the details here, we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you’d normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you might want to keep information you’d otherwise store in a separate list together with the data that was used to create it. As an example, you could potentially attach models as a list column to a dataframe that contains meta\-information about each model. Once you have a list column, you can use that column as you would any list for iterative programming.
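As a rough sketch of that last idea (not from the original text, and assuming the model\_list created earlier plus the tidyverse are loaded), you could pair each fitted model with some meta\-information and keep the model object itself in a list column:
```
model_df = tibble(
  model_name = names(model_list),
  n_coef     = map_int(model_list, ~ length(coef(.x))),  # simple meta-information
  fit        = model_list                                 # list column holding the models
)

# the list column can then be iterated over like any other list
model_df %>%
  mutate(AIC = map_dbl(fit, AIC))
```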
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/iterative.html |
Iterative Programming
=====================
Almost everything you do when dealing with data will need to be done again, and again, and again. If you are copy\-pasting your way through doing the same thing repeatedly, you’re not only working inefficiently, you’re almost certainly setting yourself up for trouble if anything changes about the data or underlying process.
In order to avoid this, you need to be familiar with basic programming, and a starting point is to use an iterative approach to repetitive problems. Let’s look at the following. Let’s say we want to get the means of some columns in our data set. Do you do something like this?
```
means1 = mean(df$x)
means2 = mean(df$y)
means3 = mean(df$z)
means4 = mean(df$q)
```
Now consider what you have to change if you change a variable name, decide to do a median, or the data object name changes. Any minor change with the data will cause you to have to redo that code, and possibly every line of it.
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc. Note that, as written, the loop doesn’t print or store those means anywhere; inside a for loop you have to explicitly print results or assign them to an object, which is what the following examples do.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
Now if the data name changes, the columns we want change, or we want to calculate something else, we usually end up changing only one thing, rather than at least one thing per line and probably many more. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a placeholder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that’s the length of the columns, so that each element will eventually be the mean of the corresponding column.
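As a small aside (not in the original text), you can also carry the column names over to the result, so the printed vector is labeled:
```
names(nyc_means) = columns  # label each mean with the column it came from
nyc_means
```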
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than the object having to grow at every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
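If you want to verify the preallocation claim on your own machine, a quick hypothetical timing comparison along these lines will do it (exact numbers will vary):
```
n = 1e5

grow = function() {
  res = numeric()               # starts empty and grows each iteration
  for (i in 1:n) res[i] = i^2
  res
}

prealloc = function() {
  res = numeric(n)              # memory allocated up front
  for (i in 1:n) res[i] = i^2
  res
}

system.time(grow())
system.time(prealloc())         # typically the faster of the two
```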
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states: while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of the ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute the corresponding mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero practical difference between using the while approach and the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
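To make that ‘check’ use case concrete, here is a hypothetical sketch of a while loop that keeps iterating until successive estimates stop changing by more than some tolerance, the way an iterative estimation routine might:
```
estimate = 10          # a (bad) starting guess for sqrt(2)
tol      = 1e-8
change   = Inf

while (change > tol) {
  new_estimate = (estimate + 2 / estimate) / 2   # one Newton-style update
  change       = abs(new_estimate - estimate)
  estimate     = new_estimate
}

estimate   # approximately 1.414214
```
A for loop would be awkward here, because we don’t know in advance how many iterations will be needed.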
### Loops summary
Understanding loops is fundamental to spending less time processing data and more time exploring it. Your code will be more succinct and better able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next \- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame and apply a function over the margin (row or column) you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows or columns is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
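For example (hedging a bit, since these aren’t shown in the original text), tapply applies a function to a vector within groups, and replicate simply evaluates an expression N times:
```
# mean mpg for each number of cylinders ('table' apply)
tapply(mtcars$mpg, mtcars$cyl, mean)

# run a small simulation 3 times; the results are collected into a matrix
replicate(3, rnorm(5))
```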
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come across code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops due to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g. parApply, parLapply, etc. As most modern computers have at least four cores to play with, you’ll almost always have the potential for nearly a 4x speedup by using the parallel apply functions.
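A minimal sketch of what that looks like with the parallel package (assuming a list like the ones used in this chapter; the cluster setup adds overhead, so this pays off mainly for heavier computations):
```
library(parallel)

cl = makeCluster(4)              # or detectCores() - 1

x = list(1:3, 4:10, 11:100)
parLapply(cl, x, sum)            # parallel counterpart of lapply

stopCluster(cl)                  # always shut the workers down when done
```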
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference between a for loop and a while loop. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However, they allow you to stay within the tidy realm, which has its own benefits, and give you more control over the nature of the output[14](#fn14), which is especially important for reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to them.
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list for which each element is a model object. We then apply a function, e.g. to get the AIC value for each, or adjusted R square (this requires a custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
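As an aside, here is a sketch of an alternative that builds the summary data frame directly with purrr’s imap\_dfr, skipping the pivoting step (assuming the same model\_list; the helper logic just mirrors the code above):
```
model_fit =
  model_list %>%
  imap_dfr(
    ~ tibble(
      model = .y,                                   # .y is the list element's name
      AIC   = AIC(.x),
      Rsq   = if (inherits(.x, 'gam')) summary(.x)$r.sq else summary(.x)$adj
    )
  )

model_fit %>% arrange(AIC)
```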
#### List columns
As data.frames are lists, anything can be put into a column just as you would a list element. We’ll use pmap here, as it can take more than one argument, and we’re feeding all columns of the data.frame. You don’t need to worry about the details here, we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you’d normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you might want to keep information you’d otherwise store in a separate list together with the data that was used to create it. As an example, you could potentially attach models as a list column to a dataframe that contains meta\-information about each model. Once you have a list column, you can use that column as you would any list for iterative programming.
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly with just `mean(df$x)`.
Now if the data name changes, the columns we want change, or we want to calculate something else, we usually end up only changing one thing, rather than at least changing one at a minimum, and probably many more things. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a place holder for a number that goes from 1 to the length of columns (i.e. 4\). We make an empty nyc\_means object that’s the length of the columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is already allocated for all elements of nyc\_means, rather than updating it every iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It states, while `i` is less than or equal to the length (number) of columns, compute the value of the ith element of nyc\_means as the mean of ith column of weather. After that, increase the value of `i`. So, we start with `i = 1`, compute that subsequent mean, `i` now equals 2, do the process again, and so on. The process will stop as soon as the value of `i` is greater than the length of columns.
*There is zero difference to using the while approach vs. the for loop*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
### Loops summary
Understanding loops is fundamental toward spending less time processing data and more time toward exploring it. Your code will be more succinct and more able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next\- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you create an empty object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops do to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g.parApply, parLapply etc. As every modern computer has at least four cores to play with, you’ll always potentially have nearly a 4x speedup by using the parallel apply functions.
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference for a for loop vs. using while. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package allows you to take the apply family approach to the tidyverse. And with packages future \+ furrr, they too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However they allow you stay within the tidy realm, which has its own benefits, and have more control over the nature of the output[14](#fn14), which is especially important in reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to it.
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to those rows is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/iterative.html |
Iterative Programming
=====================
Almost everything you do when dealing with data will need to be done again, and again, and again. If you are copy\-pasting your way through repetitive tasks, you’re not only doing things inefficiently, you’re almost certainly setting yourself up for trouble if anything changes about the data or underlying process.
In order to avoid this, you need to be familiar with basic programming, and a starting point is to use an iterative approach to repetitive problems. Consider the following: say we want to get the means of some columns in our data set. Do you do something like this?
```
means1 = mean(df$x)
means2 = mean(df$y)
means3 = mean(df$z)
means4 = mean(df$q)
```
Now consider what you have to change if you change a variable name, decide to compute a median instead, or the data object gets a new name. Any minor change will force you to redo that code, possibly every line of it.
For Loops
---------
A for loop will help us get around the problem. The idea is that we want to perform a particular action *for* every iteration of some sequence. That sequence may be over columns, rows, lines in a text, whatever. Here is a loop.
```
for (column in c('x','y','z','q')) {
mean(df[[column]])
}
```
What’s going on here? We’ve created an iterative process in which, *for* every *element* in `c('x','y','z','q')`, we are going to do something. We use the completely arbitrary word `column` as a placeholder to index which of the four columns we’re dealing with at a given point in the process. On the first iteration, `column` will equal `x`, on the second `y`, and so on. We then take the mean of `df[[column]]`, which will be `df[['x']]`, then `df[['y']]`, etc.
Here is an example with the nycflights data, which regards flights that departed New York City in 2013\. The weather data set has columns for things like temperature, humidity, and so forth.
```
weather = nycflights13::weather
for (column in c('temp', 'humid', 'wind_speed', 'precip')) {
print(mean(weather[[column]], na.rm = TRUE))
}
```
```
[1] 55.26039
[1] 62.53006
[1] 10.51749
[1] 0.004469079
```
You can check this for yourself by testing a column or two directly.
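For example, a quick check against the first value printed above:
```
mean(weather$temp, na.rm = TRUE)  # ~ 55.26, the same as the first value from the loop
```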
Now if the data name changes, the columns we want change, or we want to calculate something else, we typically only have to change one thing in the code, rather than at least one thing per column, and probably more. In addition, the amount of code is the same whether the loop goes over 100 columns or 4\.
Let’s do things a little differently.
```
columns = c('temp', 'humid', 'wind_speed', 'precip')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
column = columns[i]
nyc_means[i] = mean(weather[[column]], na.rm = TRUE)
# alternative without the initial first step
# nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means
```
```
[1] 55.260392127 62.530058972 10.517488384 0.004469079
```
By creating a columns object, if anything changes about the columns we want, that’s the only line in the code that would need to be changed. The `i` is now a placeholder for a number that goes from 1 to the length of columns (i.e. 4\). We make an nyc\_means object filled with NA values that’s the length of the columns, so that each element will eventually be the mean of the corresponding column.
In the following I remove precipitation and add visibility and air pressure.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = rep(NA, length(columns))
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
Had we been copy\-pasting, this would require deleting or commenting out a line in our code, pasting two more, and changing each one after pasting to represent the new columns. That’s tedious, and not a fun way to code.
### A slight speed gain
Note that you do not have to create an empty object like we did. The following works also.
```
columns = c('temp', 'humid', 'wind_speed', 'visib', 'pressure')
nyc_means = numeric()
for (i in seq_along(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
However, the other approach is slightly faster, because memory is allocated for all elements of nyc\_means up front, rather than the vector being grown at each iteration of the loop. This speed gain can become noticeable when dealing with thousands of columns and complex operations.
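If you want to see this for yourself, a rough timing sketch along the following lines compares a pre\-allocated vector to one grown inside the loop (exact numbers will vary by machine, and the difference only matters for bigger problems).
```
n = 1e5
# pre-allocated: memory for all n elements is set aside up front
system.time({
  out = numeric(n)
  for (i in 1:n) out[i] = sqrt(i)
})
# grown: the vector is repeatedly extended as the loop proceeds
system.time({
  out = numeric()
  for (i in 1:n) out[i] = sqrt(i)
})
```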
### While alternative
When you look at some people’s R code, you may see a loop of a different sort.
```
columns = c('temp','humid','wind_speed', 'visib', 'pressure')
nyc_means = c()
i = 1
while (i <= length(columns)) {
nyc_means[i] = mean(weather[[columns[i]]], na.rm = TRUE)
i = i + 1
}
nyc_means %>% round(2)
```
```
[1] 55.26 62.53 10.52 9.26 1017.90
```
This involves a while statement. It says: while `i` is less than or equal to the length (number) of columns, compute the ith element of nyc\_means as the mean of the column of weather named by the ith element of columns, then increase the value of `i`. So we start with `i = 1`, compute that mean, `i` becomes 2, we do the process again, and so on. The loop stops as soon as the value of `i` is greater than the length of columns.
*There is zero practical difference between using the while approach and the for loop here*. While is often used when there is a check to be made, e.g. in modeling functions that have to stop the estimation process at some point, or else they’d go on indefinitely. In that case the while syntax is probably more natural. Either is fine.
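As a toy sketch of that kind of use, the following while loop runs until a value drops below a tolerance, i.e. the number of iterations isn’t known in advance.
```
value = 1
iter = 0
while (value > .001) {  # keep going until the 'convergence' check is met
  value = value / 2
  iter = iter + 1
}
iter  # 10, since 1/2^10 is the first value below .001
```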
### Loops summary
Understanding loops is fundamental to spending less time processing data and more time exploring it. Your code will be more succinct and better able to handle the usual changes that come with dealing with data. Now that you have a sense of it, know that once you are armed with the sorts of things we’ll be talking about next \- apply functions, writing functions, and vectorization \- you’ll likely have little need to write explicit loops. While there is always a need for iterative processing of data, R provides even more efficient means to do so.
Implicit Loops
--------------
Writing loops is straightforward once you get the initial hang of it. However, R offers alternative ways to do loops that can simplify code without losing readability. As such, even when you loop in R, you don’t have to do so explicitly.
### apply family
A family of functions comes with R that allows for a succinct way of looping when it is appropriate. Common functions in this family include:
* apply
+ arrays, matrices, data.frames
* lapply, sapply, vapply
+ lists, data.frames, vectors
* tapply
+ grouped operations (table apply)
* mapply
+ multivariate version of sapply
* replicate
+ performs an operation N times
As an example we’ll consider standardizing variables, i.e. taking a set of numbers, subtracting the mean, and dividing by the standard deviation. This results in a variable with mean of 0 and standard deviation of 1\. Let’s start with a loop approach.
```
for (i in 1:ncol(mydf)) {
x = mydf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
```
The above would be a really bad way to use R. It goes over each column individually, then over each value of the column.
Conversely, apply will take a matrix or data frame, and apply a function over the margin, row or column, you want to loop over. The first argument is the data you’re considering, the margin is the second argument (1 for rows, 2 for columns[12](#fn12)), and the function you want to apply to each row or column is the third argument. The following example is much cleaner compared to the loop, and now you’d have a function you can use elsewhere if needed.
```
stdize <- function(x) {
(x - mean(x)) / sd(x)
}
apply(mydf, 2, stdize) # 1 for rows, 2 for columnwise application
```
Many of the other apply functions work similarly, taking an object and a function to do the work on the object (possibly implicit), possibly with other arguments specified if necessary.
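As a quick sketch of a few of the other family members listed above, using the built\-in mtcars data:
```
sapply(mtcars, mean)                  # loops over the columns of a data frame, simplified to a vector
tapply(mtcars$mpg, mtcars$cyl, mean)  # grouped operation: mean mpg for each cylinder count
replicate(3, mean(rnorm(100)))        # repeats an expression 3 times
```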
#### lapply
Let’s say we have a list object, or even just a vector of values. There are no rows or columns to iterate over, so what do we do here?
```
x = list('aba', 'abb', 'abc', 'abd', 'abe')
lapply(x, str_remove, pattern = 'ab')
```
```
[[1]]
[1] "a"
[[2]]
[1] "b"
[[3]]
[1] "c"
[[4]]
[1] "d"
[[5]]
[1] "e"
```
The lapply operation iterates over each element of the list and applies a function to them. In this case, the function is str\_remove. It has an argument for the string pattern we want to take out of the character string that is fed to it (‘ab’). For example, for ‘aba’ we will be left with just the ‘a’.
As can be seen, lapply starts with a list and returns a list. The only difference with sapply is that sapply will return a simplified form if possible[13](#fn13).
```
sapply(x, str_remove, pattern = 'ab')
```
```
[1] "a" "b" "c" "d" "e"
```
In this case we just get a vector back.
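The list above also included vapply, which is a stricter sapply: you declare the type (and length) of result you expect for each element, and it will error rather than silently return something else. A minimal sketch with the same x:
```
vapply(x, str_remove, FUN.VALUE = character(1), pattern = 'ab')  # each result must be a single character value
```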
### Apply functions
It is important to be familiar with the apply family for efficient data processing, if only because you’ll regularly come across code employing these functions. A summary of benefits includes:
* Cleaner/simpler code
* Environment kept clear of unnecessary objects
* Potentially more reproducible
+ more likely to use generalizable functions
* Parallelizable
Note that apply functions are NOT necessarily faster than explicit loops, and if you pre\-allocate the object for the loop as discussed previously, the explicit loop will likely be faster. On top of that, functions like replicate and mapply are especially slow.
However, the apply family can ALWAYS *potentially* be faster than standard R loops due to parallelization. With base R’s parallel package, there are parallel versions of the apply family, e.g. parApply, parLapply etc. As most modern computers have at least four cores to play with, you’ll often have the potential for nearly a 4x speedup by using the parallel apply functions.
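A minimal sketch of the parallel pattern, assuming four cores are available (for a toy task like column means the overhead will outweigh any gain; the point is just the pattern):
```
library(parallel)
cl = makeCluster(4)          # start 4 worker processes
parSapply(cl, mtcars, mean)  # parallel version of sapply over the columns
stopCluster(cl)              # always shut the workers down when done
```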
Apply functions and similar approaches should be a part of your regular R experience. We’ll talk about other options that may have even more benefits, but you need to know the basics of how apply functions work in order to use those.
I use R every day, and very rarely use explicit loops. Note that there is no speed difference for a for loop vs. using while. And if you must use an explicit loop, create an empty object of the dimension/form you need, and then fill it in via the loop. This will be notably faster.
I pretty much never use an explicit double loop, as a little more thinking about the problem will usually provide a more efficient path to solving the problem.
### purrr
The purrr package brings the apply family approach to the tidyverse. And with the future \+ furrr packages, the purrr functions too are parallelizable.
Consider the following. We’ll use the map function to map the sum function to each element in the list, the same way we would with lapply.
```
x = list(1:3, 4:6, 7:9)
map(x, sum)
```
```
[[1]]
[1] 6
[[2]]
[1] 15
[[3]]
[1] 24
```
The map functions take some getting used to, and in my experience they are typically slower than the apply functions, sometimes notably so. However, they allow you to stay within the tidy realm, which has its own benefits, and give you more control over the nature of the output[14](#fn14), which is especially important for reproducibility, package development, producing production\-level code, etc. The key idea is that the map functions will always return something the same length as the input given to them.
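For example, the type\-specific variants insist on a particular kind of output, and will fail loudly rather than silently change the result’s shape. A couple of sketches with the same x:
```
map_int(x, length)                       # always an integer vector
map_chr(x, ~ paste(.x, collapse = '-'))  # always a character vector
```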
The purrr functions want a list or vector, i.e. they don’t work with data.frame objects in the same way we’ve done with mutate and summarize except in the sense that data.frames are lists.
```
## mtcars %>%
## map(scale) # returns a list, not shown
mtcars %>%
map_df(scale) # returns a df
```
```
# A tibble: 32 x 11
mpg[,1] cyl[,1] disp[,1] hp[,1] drat[,1] wt[,1] qsec[,1] vs[,1] am[,1] gear[,1] carb[,1]
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.151 -0.105 -0.571 -0.535 0.568 -0.610 -0.777 -0.868 1.19 0.424 0.735
2 0.151 -0.105 -0.571 -0.535 0.568 -0.350 -0.464 -0.868 1.19 0.424 0.735
3 0.450 -1.22 -0.990 -0.783 0.474 -0.917 0.426 1.12 1.19 0.424 -1.12
4 0.217 -0.105 0.220 -0.535 -0.966 -0.00230 0.890 1.12 -0.814 -0.932 -1.12
5 -0.231 1.01 1.04 0.413 -0.835 0.228 -0.464 -0.868 -0.814 -0.932 -0.503
6 -0.330 -0.105 -0.0462 -0.608 -1.56 0.248 1.33 1.12 -0.814 -0.932 -1.12
7 -0.961 1.01 1.04 1.43 -0.723 0.361 -1.12 -0.868 -0.814 -0.932 0.735
8 0.715 -1.22 -0.678 -1.24 0.175 -0.0278 1.20 1.12 -0.814 0.424 -0.503
9 0.450 -1.22 -0.726 -0.754 0.605 -0.0687 2.83 1.12 -0.814 0.424 -0.503
10 -0.148 -0.105 -0.509 -0.345 0.605 0.228 0.253 1.12 -0.814 0.424 0.735
# … with 22 more rows
```
```
mtcars %>%
map_dbl(sum) # returns a numeric (double) vector of column sums
```
```
mpg cyl disp hp drat wt qsec vs am gear carb
642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000 13.000 118.000 90.000
```
```
diamonds %>%
map_at(
vars(carat, depth, price),
function(x)
as.integer(x > median(x))
) %>%
as_tibble()
```
```
# A tibble: 53,940 x 10
carat cut color clarity depth table price x y z
<int> <ord> <ord> <ord> <int> <dbl> <int> <dbl> <dbl> <dbl>
1 0 Ideal E SI2 0 55 0 3.95 3.98 2.43
2 0 Premium E SI1 0 61 0 3.89 3.84 2.31
3 0 Good E VS1 0 65 0 4.05 4.07 2.31
4 0 Premium I VS2 1 58 0 4.2 4.23 2.63
5 0 Good J SI2 1 58 0 4.34 4.35 2.75
6 0 Very Good J VVS2 1 57 0 3.94 3.96 2.48
7 0 Very Good I VVS1 1 57 0 3.95 3.98 2.47
8 0 Very Good H SI1 1 55 0 4.07 4.11 2.53
9 0 Fair E VS2 1 61 0 3.87 3.78 2.49
10 0 Very Good H VS1 0 61 0 4 4.05 2.39
# … with 53,930 more rows
```
However, working with lists is very useful, so let’s turn to that.
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate over some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list in which each element is a model object, then apply a function to each element, e.g. to get the AIC value for each model, or the adjusted R square (the latter requires a small custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
#### List columns
As data.frames are lists, anything can be put into a column just as you would put it into a list element. We’ll use pmap here, as it can iterate over more than one argument at a time, and we’re feeding it all of the columns of the data.frame. You don’t need to worry about the details here; we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you’d normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you might want to keep information you would otherwise store in a separate list together with the data that was used to create it. As an example, you could attach models as a list column to a data frame that contains meta\-information about each model. Once you have a list column, you can use that column as you would any list for iterative programming.
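As a minimal sketch of that last idea, we could put the fitted models from before into a list column alongside some per\-model information (purely illustrative; model\_list comes from the earlier example).
```
model_df = tibble(
  model = names(model_list),
  fit   = model_list            # a list column holding the model objects themselves
) %>%
  mutate(aic = map_dbl(fit, AIC))
model_df
```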
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
Looping with Lists
------------------
Aside from data frames, you may think you don’t have much need for list objects. However, list objects make it very easy to iterate some form of data processing.
Let’s say you have models of increasing complexity, and you want to easily summarise and/or compare them. We create a list for which each element is a model object. We then apply a function, e.g. to get the AIC value for each, or adjusted R square (this requires a custom function).
```
library(mgcv) # for gam
mtcars$cyl = factor(mtcars$cyl)
mod_lm = lm(mpg ~ wt, data = mtcars)
mod_poly = lm(mpg ~ poly(wt, 2), data = mtcars)
mod_inter = lm(mpg ~ wt * cyl, data = mtcars)
mod_gam = gam(mpg ~ s(wt), data = mtcars)
mod_gam_inter = gam(mpg ~ cyl + s(wt, by = cyl), data = mtcars)
model_list = list(
mod_lm = mod_lm,
mod_poly = mod_poly,
mod_inter = mod_inter,
mod_gam = mod_gam,
mod_gam_inter = mod_gam_inter
)
# lowest wins
model_list %>%
map_dbl(AIC) %>%
sort()
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
150.6324 155.4811 158.0484 158.5717 166.0294
```
```
# highest wins
model_list %>%
map_dbl(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
sort(decreasing = TRUE)
```
```
mod_gam_inter mod_inter mod_poly mod_gam mod_lm
0.8643020 0.8349382 0.8065828 0.8041651 0.7445939
```
Let’s go further and create a plot of these results. We’ll map to a data frame, use pivot\_longer to melt it to two columns of model and value, then use ggplot2 to plot the results[15](#fn15).
```
model_list %>%
map_df(
function(x)
if_else(inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = "Adj. Rsq") %>%
arrange(desc(`Adj. Rsq`)) %>%
mutate(model = factor(model, levels = model)) %>% # sigh
ggplot(aes(x = model, y = `Adj. Rsq`)) +
geom_point(aes(color = model), size = 10, show.legend = F)
```
Why not throw in AIC also?
```
mod_rsq =
model_list %>%
map_df(
function(x)
if_else(
inherits(x, 'gam'),
summary(x)$r.sq,
summary(x)$adj
)
) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'Rsq')
mod_aic =
model_list %>%
map_df(AIC) %>%
pivot_longer(cols = starts_with('mod'),
names_to = 'model',
values_to = 'AIC')
left_join(mod_rsq, mod_aic) %>%
arrange(AIC) %>%
mutate(model = factor(model, levels = model)) %>%
pivot_longer(cols = -model, names_to = 'measure', values_to = 'value') %>%
ggplot(aes(x = model, y = value)) +
geom_point(aes(color = model), size = 10, show.legend = F) +
facet_wrap(~ measure, scales = 'free')
```
#### List columns
As data.frames are lists, anything can be put into a column just as you would a list element. We’ll use pmap here, as it can take more than one argument, and we’re feeding all columns of the data.frame. You don’t need to worry about the details here; we just want to create a column that is actually a list. In this case the column will contain a data frame in each entry.
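If pmap is new to you, a minimal illustration (separate from the example below) is that it walks the supplied lists in parallel, passing the ith element of each as the arguments to the function.
```
# sum(1, 4, 7), sum(2, 5, 8), sum(3, 6, 9)
pmap_dbl(list(1:3, 4:6, 7:9), sum)
```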
```
mtcars2 = as.matrix(mtcars)
mtcars2[sample(1:length(mtcars2), 50)] = NA # add some missing data
mtcars2 = data.frame(mtcars2) %>%
rownames_to_column(var = 'observation') %>%
as_tibble()
head(mtcars2)
```
```
# A tibble: 6 x 12
observation mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1
```
```
mtcars2 =
mtcars2 %>%
mutate(
newvar =
pmap(., ~ data.frame(
N = sum(!is.na(c(...))),
Missing = sum(is.na(c(...)))
)
)
)
```
Now check out the list column.
```
mtcars2
```
```
# A tibble: 32 x 13
observation mpg cyl disp hp drat wt qsec vs am gear carb newvar
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 <df[,2] [1 × 2]>
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 <df[,2] [1 × 2]>
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 <df[,2] [1 × 2]>
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 <df[,2] [1 × 2]>
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 <df[,2] [1 × 2]>
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 <df[,2] [1 × 2]>
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 <df[,2] [1 × 2]>
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 <df[,2] [1 × 2]>
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> <df[,2] [1 × 2]>
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 <df[,2] [1 × 2]>
# … with 22 more rows
```
```
mtcars2$newvar %>% head(3)
```
```
[[1]]
N Missing
1 11 1
[[2]]
N Missing
1 12 0
[[3]]
N Missing
1 12 0
```
Unnest it with the tidyr function.
```
mtcars2 %>%
unnest(newvar)
```
```
# A tibble: 32 x 14
observation mpg cyl disp hp drat wt qsec vs am gear carb N Missing
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <int> <int>
1 Mazda RX4 21.0 6 160.0 "110" 3.90 2.620 <NA> 0 1 4 4 11 1
2 Mazda RX4 Wag 21.0 6 160.0 "110" 3.90 2.875 17.02 0 1 4 4 12 0
3 Datsun 710 22.8 4 108.0 " 93" 3.85 2.320 18.61 1 1 4 1 12 0
4 Hornet 4 Drive 21.4 6 258.0 "110" 3.08 3.215 19.44 1 <NA> 3 1 11 1
5 Hornet Sportabout 18.7 <NA> 360.0 "175" 3.15 3.440 17.02 0 0 3 2 11 1
6 Valiant <NA> 6 225.0 "105" <NA> 3.460 20.22 <NA> 0 3 1 9 3
7 Duster 360 <NA> 8 360.0 "245" 3.21 3.570 15.84 0 0 3 4 11 1
8 Merc 240D 24.4 4 <NA> " 62" 3.69 3.190 20.00 1 0 4 2 11 1
9 Merc 230 22.8 4 140.8 <NA> 3.92 3.150 22.90 1 0 4 <NA> 10 2
10 Merc 280 19.2 6 <NA> "123" 3.92 <NA> 18.30 1 <NA> 4 4 9 3
# … with 22 more rows
```
This is a pretty esoteric demonstration, and not something you’d normally want to do, as mutate or other approaches would be far more efficient and sensible. However, the idea is that you may want to keep information you would otherwise store in a separate list together with the data that was used to create it. As an example, you could potentially attach models as a list column to a data frame that contains meta\-information about each model. Once you have a list column, you can use that column as you would any list for iterative programming.
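For instance, a minimal sketch (not something done above) using the model\_list created earlier might look like the following.
```
# store the fitted models as a list column next to their meta-information
model_df = tibble(
  model = names(model_list),
  fit   = model_list               # a list column of model objects
) %>%
  mutate(AIC = map_dbl(fit, AIC))  # iterate over the list column as usual
```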
Iterative Programming Exercises
-------------------------------
### Exercise 1
With the following matrix, use apply and the sum function to get row or column sums of the matrix x.
```
x = matrix(1:9, 3, 3)
```
### Exercise 2
With the following list object x, use lapply and sapply and the sum function to get sums for the elements. There is no margin to specify for a list, so just supply the list and the sum function.
```
x = list(1:3, 4:10, 11:100)
```
### Exercise 3
As in the previous example, use a map function to create a data frame of the column means. See `?map` to see all your options.
```
d = tibble(
x = rnorm(100),
y = rnorm(100, 10, 2),
z = rnorm(100, 50, 10),
)
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/functions.html |
Writing Functions
=================
You can’t do anything in R without using functions, but have you ever written your own? Why would you?
* Efficiency
* Customized functionality
* Reproducibility
* Extend the work that’s already been done
There are many benefits to writing your own functions, and it’s actually easy to do. Once you get the basic concept down, you’ll likely find yourself using your own functions more and more.
A Starting Point
----------------
Let’s assume you want to calculate the mean, standard deviation, and number of missing values for a variable, called `myvar`. We could do something like the following
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
Now let’s say you need to do it for several variables. Here’s what your custom function could look like. It takes a single input, the variable you want information about, and returns a data frame with that info.
```
my_summary <- function(x) {
data.frame(
mean = mean(x),
sd = sd(x),
N_missing = sum(is.na(x))
)
}
```
In the above, `x` is an arbitrary name for an input. You can name it whatever you want, but the more meaningful the better. In R (and other languages) these are called *arguments*, and these inputs determine, in part, what the function eventually produces as output.
```
my_summary(mtcars$mpg)
```
```
mean sd N_missing
1 20.09062 6.026948 0
```
Works fine. However, data typically isn’t that pretty. It often has missing values.
```
load('data/gapminder.RData')
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 NA NA 516
```
If there are actually missing values, we need to set `na.rm = TRUE` or the mean and sd will return `NA`. Let’s try it. We can either hard bake it in, as in the initial example, or add an argument to let us control how to handle NAs with our custom function.
```
my_summary <- function(x) {
data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_missing = sum(is.na(x))
)
}
my_summary_na <- function(x, remove_na = TRUE) {
data.frame(
mean = mean(x, na.rm = remove_na),
sd = sd(x, na.rm = remove_na),
N_missing = sum(is.na(x))
)
}
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 43.13218 16.31355 516
```
```
my_summary_na(gapminder_2019$lifeExp, remove_na = FALSE)
```
```
mean sd N_missing
1 NA NA 516
```
Seems to work fine. Let’s add how many total observations there are.
```
my_summary <- function(x) {
# create an arbitrarily named object with the summary information
summary_data = data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_total = length(x),
N_missing = sum(is.na(x))
)
# return the result!
summary_data
}
```
That was easy! Let’s try it.
```
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_total N_missing
1 43.13218 16.31355 40953 516
```
Now let’s do it for every column! We’ve used the map function before, now let’s use a variant that will return a data frame.
```
gapminder_2019 %>%
select_if(is.numeric) %>%
map_dfr(my_summary, .id = 'variable')
```
```
variable mean sd N_total N_missing
1 year 1.909000e+03 6.321997e+01 40953 0
2 lifeExp 4.313218e+01 1.631355e+01 40953 516
3 pop 1.353928e+07 6.565653e+07 40953 0
4 gdpPercap 4.591026e+03 1.016210e+04 40953 0
5 giniPercap 4.005331e+01 9.102757e+00 40953 0
```
The map\_dfr function works just like our previous usage in the [iterative programming](iterative.html#iterative-programming) section, except that it creates mini\-data.frames and then row\-binds them together.
This shows that writing the first part of any function can be straightforward. Then, once in place, you can usually add functionality without too much trouble. Eventually you could have something very complicated, but which will make sense to you because you built it from the ground up.
Keep in mind as you start out that your initial decisions to make are:
* What are the inputs (arguments) to the function?
* What is the value to be returned?
When you think about writing a function, just write the code that can do it first. The goal is then to generalize beyond that single use case. RStudio even has a shortcut to help you get started. Consider our starting point. Highlight the code, hit Ctrl/Cmd \+ Shift \+ X, then give it a name.
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
It should look something like this.
```
test_fun <- function(myvar) {
mean(myvar)
sd(myvar)
sum(is.na(myvar))
}
```
RStudio could tell that you would need at least one input `myvar`, but beyond that, you’re now on your way to tweaking the function as you see fit.
Note that what goes in and what comes out could be anything, even nothing!
```
two <- function() {
2
}
two()
```
```
[1] 2
```
Or even another function!
```
center <- function(type) {
if (type == 'mean') {
mean
}
else {
median
}
}
center(type = 'mean')
```
```
function (x, ...)
UseMethod("mean")
<bytecode: 0x7fe3efc05860>
<environment: namespace:base>
```
```
myfun = center(type = 'mean')
myfun(1:5)
```
```
[1] 3
```
```
myfun = center(type = 'median')
myfun(1:4)
```
```
[1] 2.5
```
We can also set default values for the inputs.
```
hi <- function(name = 'Beyoncé') {
paste0('Hi ', name, '!')
}
hi()
```
```
[1] "Hi Beyoncé!"
```
```
hi(name = 'Jay-Z')
```
```
[1] "Hi Jay-Z!"
```
If you are working within an RStudio project, it would be a good idea to create a folder for your functions and save each as their own script. When you need the function just use the following:
```
source('my_functions/awesome_func.R')
```
This would make it easy to even create your own personal package with the functions you create.
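If you accumulate several such scripts, you could also source everything in the folder at once; here is a sketch, reusing the hypothetical folder name above.
```
# source every .R file in the functions folder
for (f in list.files('my_functions', pattern = '\\.R$', full.names = TRUE)) {
  source(f)
}
```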
However you go about creating a function and for whatever purpose, try to make a clear decision at the beginning:
* What is the (specific) goal of your function?
* What is the minimum needed to obtain that goal?
There is even a keyboard shortcut to create R style documentation automatically!
Cmd/Ctrl \+ Option/Alt \+ Shift \+ R
DRY
---
An oft\-quoted mantra in programming is ***D**on’t **R**epeat **Y**ourself*. One context regards iterative programming, where we would rather write one line of code than one hundred. More generally though, we would like to gain efficiency where possible. A good rule of thumb is: if you find yourself writing the same set of code more than twice, write a function to do it instead.
Consider the following example, where we want to subset the data given a set of conditions. Given the cylinder, engine displacement, and mileage, we’ll get different parts of the data.
```
good_mileage_displ_low_cyl_4 = if_else(cyl == 4 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_6 = if_else(cyl == 6 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_8 = if_else(cyl == 8 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_4 = if_else(cyl == 4 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_6 = if_else(cyl == 6 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_8 = if_else(cyl == 8 & displ > mean(displ) & hwy > 30, 'yes', 'no')
```
It was tedious, but that’s not much code. But now consider\- what if you want to change the mpg cutoff? The mean to median? Something else? You have to change all of it. Screw that\- let’s write a function instead! What kinds of inputs will we need?
* cyl: Which cylinder type we want
* mpg\_cutoff: The cutoff for ‘good’ mileage
* displ\_fun: Whether the displacement cutoff is based on the mean or something else
* displ\_low: Whether we are interested in low or high displacement vehicles
* cls: the class of the vehicle (e.g. compact or suv)
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact'
) {
if (displ_low == TRUE) { # condition to check, if it holds,
result <- mpg %>% # filter data given the arguments
filter(
cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls
)
}
else { # if the condition doesn't hold, filter
result <- mpg %>% # the data this way instead
filter(
cyl == cylinder,
displ >= displ_fun(displ), # the only change is here
hwy >= mpg_cutoff,
class == cls
)
}
result # return the object
}
```
So what’s going on here? Not a whole lot really. The function just filters the data to observations that match the input criteria, and returns that result at the end. We also gave the arguments *default values*, which can be done at your discretion.
Conditionals
------------
The core of the above function is a *conditional statement* using the standard if…else structure. The if part determines whether some condition holds. If it does, then proceed to the next step in the brackets. If not, skip to the else part. You may have used the ifelse function in base R, or dplyr’s if\_else as above, which are shortcuts for this approach. We can also add further conditional statements (else if), drop the else part entirely, nest conditionals within other conditionals, etc. Like loops, conditional statements look very similar across programming languages.
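For comparison, a toy R version with an else if branch (not part of the function above) would be:
```
x = 0

if (x < 0) {
  message('negative')
} else if (x == 0) {
  message('zero')
} else {
  message('positive')
}
```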
JavaScript:
```
if (Math.random() < 0.5) {
console.log("You got Heads!")
} else {
console.log("You got Tails!")
}
```
Python:
```
if x == 2:
print(x)
else:
print(x*x)
```
In any case, with our function at the ready, we can now do the things we want to as needed:
```
good_mileage(mpg_cutoff = 40)
```
```
# A tibble: 1 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compact
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 15,
displ_low = FALSE,
cls = 'suv'
)
```
```
# A tibble: 34 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 11 15 e suv
3 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
4 chevrolet c1500 suburban 2wd 5.7 1999 8 auto(l4) r 13 17 r suv
5 chevrolet c1500 suburban 2wd 6 2008 8 auto(l4) r 12 17 r suv
6 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
7 chevrolet k1500 tahoe 4wd 5.7 1999 8 auto(l4) 4 11 15 r suv
8 chevrolet k1500 tahoe 4wd 6.5 1999 8 auto(l4) 4 14 17 d suv
9 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
10 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
# … with 24 more rows
```
Let’s extend the functionality by adding a year argument (the only values available are 2008 and 1999\).
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact',
yr = 2008
) {
if (displ_low) {
result = mpg %>%
filter(cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
else {
result = mpg %>%
filter(cyl == cylinder,
displ >= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
result
}
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 19,
displ_low = FALSE,
cls = 'suv',
yr = 2008
)
```
```
# A tibble: 6 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
3 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
4 ford explorer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
5 jeep grand cherokee 4wd 4.7 2008 8 auto(l5) 4 14 19 r suv
6 mercury mountaineer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
```
So we now have something that is *flexible*, *reusable*, and *extensible*, and it took less code than writing out all the individual lines.
Anonymous functions
-------------------
Oftentimes we just need a quick and easy function for a one\-off application, especially when using apply/map functions. Consider the following two lines of code.
```
apply(mtcars, 2, sd)
apply(mtcars, 2, function(x) x / 2 )
```
The difference between the two is that for the latter, our function didn’t have to be a named object that already exists. We created a function on the fly just to serve a specific purpose. No base R function exists that does nothing but divide by two, but since it’s so simple, we just created it as needed.
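As an aside (not covered above), purrr also accepts a formula shorthand for anonymous functions, and R 4.1\+ added a compact native syntax; both of the following are just other ways to write a divide\-by\-two function.
```
# purrr formula shorthand: .x is the current element
map_dbl(1:4, ~ .x / 2)

# native lambda shorthand (R >= 4.1)
sapply(1:4, \(x) x / 2)
```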
To further illustrate this, we’ll create a robust standardization function that uses the median and median absolute deviation rather than the mean and standard deviation.
```
# some variables have a mad = 0, and so return Inf (x/0) or NaN (0/0)
# apply(mtcars, 2, function(x) (x - median(x))/mad(x)) %>%
# head()
mtcars %>%
map_df(function(x) (x - median(x))/mad(x))
```
```
# A tibble: 32 x 11
mpg cyl disp hp drat wt qsec vs am gear carb
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.333 0 -0.258 -0.169 0.291 -0.919 -0.883 NaN Inf 0 1.35
2 0.333 0 -0.258 -0.169 0.291 -0.587 -0.487 NaN Inf 0 1.35
3 0.665 -0.674 -0.629 -0.389 0.220 -1.31 0.636 Inf Inf 0 -0.674
4 0.407 0 0.439 -0.169 -0.873 -0.143 1.22 Inf NaN -0.674 -0.674
5 -0.0924 0.674 1.17 0.674 -0.774 0.150 -0.487 NaN NaN -0.674 0
6 -0.203 0 0.204 -0.233 -1.33 0.176 1.77 Inf NaN -0.674 -0.674
7 -0.905 0.674 1.17 1.58 -0.689 0.319 -1.32 NaN NaN -0.674 1.35
8 0.961 -0.674 -0.353 -0.791 -0.00710 -0.176 1.62 Inf NaN 0 0
9 0.665 -0.674 -0.395 -0.363 0.319 -0.228 3.67 Inf NaN 0 0
10 0 0 -0.204 0 0.319 0.150 0.417 Inf NaN 0 1.35
# … with 22 more rows
```
Even if you don’t use [anonymous functions](https://en.wikipedia.org/wiki/Anonymous_function) (sometimes called *lambda* functions), it’s important to understand them, because you’ll often see other people’s code using them.
While it goes beyond the scope of this document at present, I should note that RStudio has a very nice and easy to use debugger. Once you get comfortable writing functions, you can use the debugger to troubleshoot problems that arise, and test new functionality (see the ‘Debug’ menu). In addition, one can profile functions to see what parts are, for example, more memory intensive, or otherwise serve as a bottleneck (see the ‘Profile’ menu). You can use the profiler on any code, not just functions.
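For instance, one simple way to drop into the debugger from code (this also opens RStudio’s debugging pane) is to flag a function for one\-time debugging.
```
debugonce(good_mileage)        # the next call will open the debugger
good_mileage(mpg_cutoff = 40)
```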
Writing Functions Exercises
---------------------------
### Exercise 1
Write a function that takes the log of the sum of two values (i.e. just two single numbers) using the log function. Just remember that within a function, you can write R code just like you normally would.
```
log_sum <- function(a, b) {
?
}
```
### Exercise 1b
What happens if the sum of the two numbers is negative? You can’t take the log of a negative value (R returns NaN with a warning), so we’d rather raise an informative error. How might we deal with this? Try using a conditional to provide an error message via the stop function. The first part is basically identical to the function you just wrote. But given that result, you will need to check whether it is negative or not. The message can be whatever you want.
```
log_sum <- function(a, b) {
?
if (? < 0) {
stop('Your message here.')
} else {
?
return(your_log_sum_results) # this is an arbitrary name, change accordingly
}
}
```
### Exercise 2
Let’s write a function that will take a numeric variable and convert it to a character string of ‘positive’ vs. ‘negative’. We can use `if {}... else {}` structure, ifelse, or dplyr::if\_else\- they all would accomplish this. In this case, the input is a single vector of numbers, and the output will recode any negative value to ‘negative’ and positive values to ‘positive’ (or whatever you want). Here is an example of how we would just do it as a one\-off.
```
set.seed(123) # so you get the exact same 'random' result
x <- rnorm(10)
if_else(x < 0, "negative", "positive")
```
Now try your hand at writing a function for that.
```
pos_neg <- function(?) {
?
}
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/functions.html |
Writing Functions
=================
You can’t do anything in R without using functions, but have you ever written your own? Why would you?
* Efficiency
* Customized functionality
* Reproducibility
* Extend the work that’s already been done
There are many benefits to writing your own functions, and it’s actually easy to do. Once you get the basic concept down, you’ll likely find yourself using your own functions more and more.
A Starting Point
----------------
Let’s assume you want to calculate the mean, standard deviation, and number of missing values for a variable, called `myvar`. We could do something like the following
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
Now let’s say you need to do it for several variables. Here’s what your custom function could look like. It takes a single input, the variable you want information about, and returns a data frame with that info.
```
my_summary <- function(x) {
data.frame(
mean = mean(x),
sd = sd(x),
N_missing = sum(is.na(x))
)
}
```
In the above, `x` is an arbitrary name for an input. You can name it whatever you want, but the more meaningful the better. In R (and other languages) these are called *arguments*, but these inputs will determine in part what is eventually produced as output by the function.
```
my_summary(mtcars$mpg)
```
```
mean sd N_missing
1 20.09062 6.026948 0
```
Works fine. However, data typically isn’t that pretty. It often has missing values.
```
load('data/gapminder.RData')
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 NA NA 516
```
If there are actually missing values, we need to set `na.rm = TRUE` or the mean and sd will return `NA`. Let’s try it. We can either hard bake it in, as in the initial example, or add an argument to let us control how to handle NAs with our custom function.
```
my_summary <- function(x) {
data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_missing = sum(is.na(x))
)
}
my_summary_na <- function(x, remove_na = TRUE) {
data.frame(
mean = mean(x, na.rm = remove_na),
sd = sd(x, na.rm = remove_na),
N_missing = sum(is.na(x))
)
}
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 43.13218 16.31355 516
```
```
my_summary_na(gapminder_2019$lifeExp, remove_na = FALSE)
```
```
mean sd N_missing
1 NA NA 516
```
Seems to work fine. Let’s add how many total observations there are.
```
my_summary <- function(x) {
# create an arbitrarily named object with the summary information
summary_data = data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_total = length(x),
N_missing = sum(is.na(x))
)
# return the result!
summary_data
}
```
That was easy! Let’s try it.
```
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_total N_missing
1 43.13218 16.31355 40953 516
```
Now let’s do it for every column! We’ve used the map function before, now let’s use a variant that will return a data frame.
```
gapminder_2019 %>%
select_if(is.numeric) %>%
map_dfr(my_summary, .id = 'variable')
```
```
variable mean sd N_total N_missing
1 year 1.909000e+03 6.321997e+01 40953 0
2 lifeExp 4.313218e+01 1.631355e+01 40953 516
3 pop 1.353928e+07 6.565653e+07 40953 0
4 gdpPercap 4.591026e+03 1.016210e+04 40953 0
5 giniPercap 4.005331e+01 9.102757e+00 40953 0
```
The map\_dfr function is just like our previous usage in the [iterative programming](iterative.html#iterative-programming) section, just that it will create mini\-data.frames then row\-bind them together.
This shows that writing the first part of any function can be straightforward. Then, once in place, you can usually add functionality without too much trouble. Eventually you could have something very complicated, but which will make sense to you because you built it from the ground up.
Keep in mind as you start out that your initial decisions to make are:
* What are the inputs (arguments) to the function?
* What is the value to be returned?
When you think about writing a function, just write the code that can do it first. The goal is then to generalize beyond that single use case. RStudio even has a shortcut to help you get started. Consider our starting point. Highlight the code, hit Ctrl/Cmd \+ Shft \+ X, then give it a name.
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
It should look something like this.
```
test_fun <- function(myvar) {
mean(myvar)
sd(myvar)
sum(is.na(myvar))
}
```
RStudio could tell that you would need at least one input `myvar`, but beyond that, you’re now on your way to tweaking the function as you see fit.
Note that what goes in and what comes out could be anything, even nothing!
```
two <- function() {
2
}
two()
```
```
[1] 2
```
Or even another function!
```
center <- function(type) {
if (type == 'mean') {
mean
}
else {
median
}
}
center(type = 'mean')
```
```
function (x, ...)
UseMethod("mean")
<bytecode: 0x7fe3efc05860>
<environment: namespace:base>
```
```
myfun = center(type = 'mean')
myfun(1:5)
```
```
[1] 3
```
```
myfun = center(type = 'median')
myfun(1:4)
```
```
[1] 2.5
```
We can also set default values for the inputs.
```
hi <- function(name = 'Beyoncé') {
paste0('Hi ', name, '!')
}
hi()
```
```
[1] "Hi Beyoncé!"
```
```
hi(name = 'Jay-Z')
```
```
[1] "Hi Jay-Z!"
```
If you are working within an RStudio project, it would be a good idea to create a folder for your functions and save each as their own script. When you need the function just use the following:
```
source('my_functions/awesome_func.R')
```
This would make it easy to even create your own personal package with the functions you create.
However you go about creating a function and for whatever purpose, try to make a clear decision at the beginning
* What is the (specific) goal of your function?
* What is the minimum needed to obtain that goal?
There is even a keyboard shortcut to create R style documentation automatically!
Cmd/Ctrl \+ Option/Alt \+ Shift \+ R
DRY
---
An oft\-quoted mantra in programming is ***D**on’t **R**epeat **Y**ourself*. One context regards iterative programming, where we would rather write one line of code than one\-hundred. More generally though, we would like to gain efficiency where possible. A good rule of thumb is, if you are writing the same set of code more than twice, you should write a function to do it instead.
Consider the following example, where we want to subset the data given a set of conditions. Given the cylinder, engine displacement, and mileage, we’ll get different parts of the data.
```
good_mileage_displ_low_cyl_4 = if_else(cyl == 4 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_6 = if_else(cyl == 6 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_8 = if_else(cyl == 8 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_4 = if_else(cyl == 4 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_6 = if_else(cyl == 6 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_8 = if_else(cyl == 8 & displ > mean(displ) & hwy > 30, 'yes', 'no')
```
It was tedious, but that’s not much code. But now consider\- what if you want to change the mpg cutoff? The mean to median? Something else? You have to change all of it. Screw that\- let’s write a function instead! What kinds of inputs will we need?
* cyl: Which cylinder type we want
* mpg\_cutoff: The cutoff for ‘good’ mileage
* displ\_fun: Whether the displacement to be based on the mean or something else
* displ\_low: Whether we are interested in low or high displacement vehicles
* cls: the class of the vehicle (e.g. compact or suv)
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact'
) {
if (displ_low == TRUE) { # condition to check, if it holds,
result <- mpg %>% # filter data given the arguments
filter(
cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls
)
}
else { # if the condition doesn't hold, filter
result <- mpg %>% # the data this way instead
filter(
cyl == cylinder,
displ >= displ_fun(displ), # the only change is here
hwy >= mpg_cutoff,
class == cls
)
}
result # return the object
}
```
So what’s going on here? Not a whole lot really. The function just filters the data to observations that match the input criteria, and returns that result at the end. We also put *default values* to the arguments, which can be done to your discretion.
Conditionals
------------
The core of the above function uses a *conditional statement* using standard if…else structure. The if part determines whether some condition holds. If it does, then proceed to the next step in the brackets. If not, skip to the else part. You may have used the ifelse function in base R, or dplyr’s if\_else as above, which are a short cuts for this approach. We can also add conditional else statements (else if), drop the else part entirely, nest conditionals within other conditionals, etc. Like loops, conditional statements look very similar across all programming languages.
JavaScript:
```
if (Math.random() < 0.5) {
console.log("You got Heads!")
} else {
console.log("You got Tails!")
}
```
Python:
```
if x == 2:
print(x)
else:
print(x*x)
```
In any case, with our function at the ready, we can now do the things we want to as needed:
```
good_mileage(mpg_cutoff = 40)
```
```
# A tibble: 1 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compact
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 15,
displ_low = FALSE,
cls = 'suv'
)
```
```
# A tibble: 34 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 11 15 e suv
3 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
4 chevrolet c1500 suburban 2wd 5.7 1999 8 auto(l4) r 13 17 r suv
5 chevrolet c1500 suburban 2wd 6 2008 8 auto(l4) r 12 17 r suv
6 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
7 chevrolet k1500 tahoe 4wd 5.7 1999 8 auto(l4) 4 11 15 r suv
8 chevrolet k1500 tahoe 4wd 6.5 1999 8 auto(l4) 4 14 17 d suv
9 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
10 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
# … with 24 more rows
```
Let’s extend the functionality by adding a year argument (the only values available are 2008 and 1999\).
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact',
yr = 2008
) {
if (displ_low) {
result = mpg %>%
filter(cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
else {
result = mpg %>%
filter(cyl == cylinder,
displ >= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
result
}
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 19,
displ_low = FALSE,
cls = 'suv',
yr = 2008
)
```
```
# A tibble: 6 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
3 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
4 ford explorer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
5 jeep grand cherokee 4wd 4.7 2008 8 auto(l5) 4 14 19 r suv
6 mercury mountaineer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
```
So we now have something that is *flexible*, *reusable*, and *extensible*, and it took less code than writing out the individual lines of code.
Anonymous functions
-------------------
Oftentimes we just need a quick and easy function for a one\-off application, especially when using apply/map functions. Consider the following two lines of code.
```
apply(mtcars, 2, sd)
apply(mtcars, 2, function(x) x / 2 )
```
The difference between the two is that for the latter, our function didn’t have to be a named object that already exists. We created a function on the fly just to serve a specific purpose. Base R doesn’t have a function whose only job is to divide by two, but since it’s so simple, we just created it as needed.
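As an aside, you’ll also run into shorthand ways of writing anonymous functions. A quick sketch, assuming purrr is loaded for the formula style and R 4.1 or later for the backslash lambda:
```
# purrr formula shorthand, equivalent to function(x) sd(x)
mtcars %>%
  map_dbl(~ sd(.x))

# base R lambda shorthand (R 4.1+), equivalent to function(x) x / 2
apply(mtcars, 2, \(x) x / 2)
```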
To further illustrate this, we’ll create a robust standardization function that uses the median and median absolute deviation rather than the mean and standard deviation.
```
# some variables have a mad = 0, and so return Inf (x/0) or NaN (0/0)
# apply(mtcars, 2, function(x) (x - median(x))/mad(x)) %>%
# head()
mtcars %>%
map_df(function(x) (x - median(x))/mad(x))
```
```
# A tibble: 32 x 11
mpg cyl disp hp drat wt qsec vs am gear carb
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.333 0 -0.258 -0.169 0.291 -0.919 -0.883 NaN Inf 0 1.35
2 0.333 0 -0.258 -0.169 0.291 -0.587 -0.487 NaN Inf 0 1.35
3 0.665 -0.674 -0.629 -0.389 0.220 -1.31 0.636 Inf Inf 0 -0.674
4 0.407 0 0.439 -0.169 -0.873 -0.143 1.22 Inf NaN -0.674 -0.674
5 -0.0924 0.674 1.17 0.674 -0.774 0.150 -0.487 NaN NaN -0.674 0
6 -0.203 0 0.204 -0.233 -1.33 0.176 1.77 Inf NaN -0.674 -0.674
7 -0.905 0.674 1.17 1.58 -0.689 0.319 -1.32 NaN NaN -0.674 1.35
8 0.961 -0.674 -0.353 -0.791 -0.00710 -0.176 1.62 Inf NaN 0 0
9 0.665 -0.674 -0.395 -0.363 0.319 -0.228 3.67 Inf NaN 0 0
10 0 0 -0.204 0 0.319 0.150 0.417 Inf NaN 0 1.35
# … with 22 more rows
```
Even if you don’t use [anonymous functions](https://en.wikipedia.org/wiki/Anonymous_function) (sometimes called *lambda* functions), it’s important to understand them, because you’ll often see other people’s code using them.
While it goes beyond the scope of this document at present, I should note that RStudio has a very nice, easy\-to\-use debugger. Once you get comfortable writing functions, you can use the debugger to troubleshoot problems that arise and to test new functionality (see the ‘Debug’ menu). In addition, you can profile functions to see which parts are, for example, more memory intensive or otherwise serve as a bottleneck (see the ‘Profile’ menu). You can use the profiler on any code, not just functions.
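For example, RStudio’s debugger builds on base R tools like debugonce and browser, so a minimal sketch like the following works even outside the IDE (`my_func` is just a throwaway example).
```
# step through good_mileage line by line on its next call
debugonce(good_mileage)
good_mileage(mpg_cutoff = 40)

# or pause execution inside your own function to poke around
my_func <- function(x) {
  y = x * 2
  browser()  # execution pauses here; press Q to quit the browser
  y + 1
}
```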
Writing Functions Exercises
---------------------------
### Exercise 1
Write a function that takes the log of the sum of two values (i.e. just two single numbers) using the log function. Just remember that within a function, you can write R code just like you normally would.
```
log_sum <- function(a, b) {
?
}
```
### Exercise 1b
What happens if the sum of the two numbers is negative? You can’t take the log of a negative value (R returns NaN with a warning), so we should treat it as an error. How might we deal with this? Try using a conditional to provide an error message via the stop function. The first part is basically identical to the function you just wrote, but given that result, you will need to check whether it is negative or not. The message can be whatever you want.
```
log_sum <- function(a, b) {
?
if (? < 0) {
stop('Your message here.')
} else {
?
return(your_log_sum_results) # this is an arbitrary name, change accordingly
}
}
```
### Exercise 2
Let’s write a function that will take a numeric variable and convert it to a character string of ‘positive’ vs. ‘negative’. We can use `if {}... else {}` structure, ifelse, or dplyr::if\_else\- they all would accomplish this. In this case, the input is a single vector of numbers, and the output will recode any negative value to ‘negative’ and positive values to ‘positive’ (or whatever you want). Here is an example of how we would just do it as a one\-off.
```
set.seed(123) # so you get the exact same 'random' result
x <- rnorm(10)
if_else(x < 0, "negative", "positive")
```
Now try your hand at writing a function for that.
```
pos_neg <- function(?) {
?
}
```
Writing Functions
=================
You can’t do anything in R without using functions, but have you ever written your own? Why would you?
* Efficiency
* Customized functionality
* Reproducibility
* Extend the work that’s already been done
There are many benefits to writing your own functions, and it’s actually easy to do. Once you get the basic concept down, you’ll likely find yourself using your own functions more and more.
A Starting Point
----------------
Let’s assume you want to calculate the mean, standard deviation, and number of missing values for a variable, called `myvar`. We could do something like the following
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
Now let’s say you need to do it for several variables. Here’s what your custom function could look like. It takes a single input, the variable you want information about, and returns a data frame with that info.
```
my_summary <- function(x) {
data.frame(
mean = mean(x),
sd = sd(x),
N_missing = sum(is.na(x))
)
}
```
In the above, `x` is an arbitrary name for an input. You can name it whatever you want, but the more meaningful the better. In R (and other languages) these are called *arguments*, but these inputs will determine in part what is eventually produced as output by the function.
```
my_summary(mtcars$mpg)
```
```
mean sd N_missing
1 20.09062 6.026948 0
```
Works fine. However, data typically isn’t that pretty. It often has missing values.
```
load('data/gapminder.RData')
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 NA NA 516
```
If there are actually missing values, we need to set `na.rm = TRUE` or the mean and sd will return `NA`. Let’s try it. We can either hard bake it in, as in the initial example, or add an argument to let us control how to handle NAs with our custom function.
```
my_summary <- function(x) {
data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_missing = sum(is.na(x))
)
}
my_summary_na <- function(x, remove_na = TRUE) {
data.frame(
mean = mean(x, na.rm = remove_na),
sd = sd(x, na.rm = remove_na),
N_missing = sum(is.na(x))
)
}
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_missing
1 43.13218 16.31355 516
```
```
my_summary_na(gapminder_2019$lifeExp, remove_na = FALSE)
```
```
mean sd N_missing
1 NA NA 516
```
Seems to work fine. Let’s add how many total observations there are.
```
my_summary <- function(x) {
# create an arbitrarily named object with the summary information
summary_data = data.frame(
mean = mean(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
N_total = length(x),
N_missing = sum(is.na(x))
)
# return the result!
summary_data
}
```
That was easy! Let’s try it.
```
my_summary(gapminder_2019$lifeExp)
```
```
mean sd N_total N_missing
1 43.13218 16.31355 40953 516
```
Now let’s do it for every column! We’ve used the map function before, now let’s use a variant that will return a data frame.
```
gapminder_2019 %>%
select_if(is.numeric) %>%
map_dfr(my_summary, .id = 'variable')
```
```
variable mean sd N_total N_missing
1 year 1.909000e+03 6.321997e+01 40953 0
2 lifeExp 4.313218e+01 1.631355e+01 40953 516
3 pop 1.353928e+07 6.565653e+07 40953 0
4 gdpPercap 4.591026e+03 1.016210e+04 40953 0
5 giniPercap 4.005331e+01 9.102757e+00 40953 0
```
The map\_dfr function is just like our previous usage in the [iterative programming](iterative.html#iterative-programming) section, just that it will create mini\-data.frames then row\-bind them together.
This shows that writing the first part of any function can be straightforward. Then, once in place, you can usually add functionality without too much trouble. Eventually you could have something very complicated, but which will make sense to you because you built it from the ground up.
Keep in mind as you start out that your initial decisions to make are:
* What are the inputs (arguments) to the function?
* What is the value to be returned?
When you think about writing a function, just write the code that can do it first. The goal is then to generalize beyond that single use case. RStudio even has a shortcut to help you get started. Consider our starting point. Highlight the code, hit Ctrl/Cmd \+ Shft \+ X, then give it a name.
```
mean(myvar)
sd(myvar)
sum(is.na(myvar))
```
It should look something like this.
```
test_fun <- function(myvar) {
mean(myvar)
sd(myvar)
sum(is.na(myvar))
}
```
RStudio could tell that you would need at least one input `myvar`, but beyond that, you’re now on your way to tweaking the function as you see fit.
Note that what goes in and what comes out could be anything, even nothing!
```
two <- function() {
2
}
two()
```
```
[1] 2
```
Or even another function!
```
center <- function(type) {
if (type == 'mean') {
mean
}
else {
median
}
}
center(type = 'mean')
```
```
function (x, ...)
UseMethod("mean")
<bytecode: 0x7fe3efc05860>
<environment: namespace:base>
```
```
myfun = center(type = 'mean')
myfun(1:5)
```
```
[1] 3
```
```
myfun = center(type = 'median')
myfun(1:4)
```
```
[1] 2.5
```
We can also set default values for the inputs.
```
hi <- function(name = 'Beyoncé') {
paste0('Hi ', name, '!')
}
hi()
```
```
[1] "Hi Beyoncé!"
```
```
hi(name = 'Jay-Z')
```
```
[1] "Hi Jay-Z!"
```
If you are working within an RStudio project, it would be a good idea to create a folder for your functions and save each as its own script. When you need a function, just use the following:
```
source('my_functions/awesome_func.R')
```
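If that folder starts to accumulate several scripts, you could source all of them at once. A minimal sketch, assuming the same `my_functions` folder as above:
```
# source every .R file in the functions folder
for (f in list.files('my_functions', pattern = '\\.R$', full.names = TRUE)) {
  source(f)
}
```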
This also makes it easy to eventually create your own personal package from the functions you write.
However you go about creating a function and for whatever purpose, try to make a clear decision at the beginning:
* What is the (specific) goal of your function?
* What is the minimum needed to obtain that goal?
There is even a keyboard shortcut to create R style documentation automatically!
Cmd/Ctrl \+ Option/Alt \+ Shift \+ R
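That shortcut inserts a roxygen2-style comment skeleton above your function for you to fill in. Roughly it looks like the following, shown here filled in for `my_summary`; the descriptions are just examples, and the exact template may vary by RStudio version.
```
#' Summarize a numeric variable
#'
#' @param x a numeric vector
#'
#' @return a data.frame with the mean, sd, total N, and number missing
#' @export
#'
#' @examples
#' my_summary(mtcars$mpg)
my_summary <- function(x) {
  data.frame(
    mean      = mean(x, na.rm = TRUE),
    sd        = sd(x, na.rm = TRUE),
    N_total   = length(x),
    N_missing = sum(is.na(x))
  )
}
```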
DRY
---
An oft\-quoted mantra in programming is ***D**on’t **R**epeat **Y**ourself*. One context regards iterative programming, where we would rather write one line of code than one\-hundred. More generally though, we would like to gain efficiency where possible. A good rule of thumb is, if you are writing the same set of code more than twice, you should write a function to do it instead.
Consider the following example, where we want to subset the data given a set of conditions. Given the cylinder, engine displacement, and mileage, we’ll get different parts of the data.
```
good_mileage_displ_low_cyl_4 = if_else(cyl == 4 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_6 = if_else(cyl == 6 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_low_cyl_8 = if_else(cyl == 8 & displ < mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_4 = if_else(cyl == 4 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_6 = if_else(cyl == 6 & displ > mean(displ) & hwy > 30, 'yes', 'no')
good_mileage_displ_high_cyl_8 = if_else(cyl == 8 & displ > mean(displ) & hwy > 30, 'yes', 'no')
```
It was tedious, but that's not much code. Now consider: what if you want to change the mpg cutoff? The mean to the median? Something else? You'd have to change all of it. Screw that, let's write a function instead! What kinds of inputs will we need?
* cyl: Which cylinder type we want
* mpg\_cutoff: The cutoff for ‘good’ mileage
* displ\_fun: The function used to determine the displacement cutoff (e.g. mean or median)
* displ\_low: Whether we are interested in low or high displacement vehicles
* cls: The class of the vehicle (e.g. compact or suv)
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact'
) {
if (displ_low == TRUE) { # condition to check, if it holds,
result <- mpg %>% # filter data given the arguments
filter(
cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls
)
}
else { # if the condition doesn't hold, filter
result <- mpg %>% # the data this way instead
filter(
cyl == cylinder,
displ >= displ_fun(displ), # the only change is here
hwy >= mpg_cutoff,
class == cls
)
}
result # return the object
}
```
So what's going on here? Not a whole lot really. The function just filters the data to observations that match the input criteria, and returns that result at the end. We also gave the arguments *default values*, which you can do at your discretion.
Conditionals
------------
The core of the above function uses a *conditional statement* with the standard if…else structure. The if part determines whether some condition holds. If it does, proceed to the next step in the brackets. If not, skip to the else part. You may have used the ifelse function in base R, or dplyr's if\_else as above, which are shortcuts for this approach. We can also add further conditional branches (else if), drop the else part entirely, nest conditionals within other conditionals, etc. Like loops, conditional statements look very similar across all programming languages.
JavaScript:
```
if (Math.random() < 0.5) {
console.log("You got Heads!")
} else {
console.log("You got Tails!")
}
```
Python:
```
if x == 2:
print(x)
else:
print(x*x)
```
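And for completeness, an R version that includes an `else if` branch. This is just a toy sketch, unrelated to the mileage function; note that at the top level, `else` has to follow the closing brace on the same line.
```
x = 2
if (x > 2) {
  'big'
} else if (x == 2) {
  'exactly two'
} else {
  'small'
}
```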
In any case, with our function at the ready, we can now do the things we want to as needed:
```
good_mileage(mpg_cutoff = 40)
```
```
# A tibble: 1 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 volkswagen jetta 1.9 1999 4 manual(m5) f 33 44 d compact
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 15,
displ_low = FALSE,
cls = 'suv'
)
```
```
# A tibble: 34 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 11 15 e suv
3 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
4 chevrolet c1500 suburban 2wd 5.7 1999 8 auto(l4) r 13 17 r suv
5 chevrolet c1500 suburban 2wd 6 2008 8 auto(l4) r 12 17 r suv
6 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
7 chevrolet k1500 tahoe 4wd 5.7 1999 8 auto(l4) 4 11 15 r suv
8 chevrolet k1500 tahoe 4wd 6.5 1999 8 auto(l4) 4 14 17 d suv
9 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
10 dodge durango 4wd 4.7 2008 8 auto(l5) 4 13 17 r suv
# … with 24 more rows
```
Let’s extend the functionality by adding a year argument (the only values available are 2008 and 1999\).
```
good_mileage <- function(
cylinder = 4,
mpg_cutoff = 30,
displ_fun = mean,
displ_low = TRUE,
cls = 'compact',
yr = 2008
) {
if (displ_low) {
result = mpg %>%
filter(cyl == cylinder,
displ <= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
else {
result = mpg %>%
filter(cyl == cylinder,
displ >= displ_fun(displ),
hwy >= mpg_cutoff,
class == cls,
year == yr)
}
result
}
```
```
good_mileage(
cylinder = 8,
mpg_cutoff = 19,
displ_low = FALSE,
cls = 'suv',
yr = 2008
)
```
```
# A tibble: 6 x 11
manufacturer model displ year cyl trans drv cty hwy fl class
<chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
1 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
2 chevrolet c1500 suburban 2wd 5.3 2008 8 auto(l4) r 14 20 r suv
3 chevrolet k1500 tahoe 4wd 5.3 2008 8 auto(l4) 4 14 19 r suv
4 ford explorer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
5 jeep grand cherokee 4wd 4.7 2008 8 auto(l5) 4 14 19 r suv
6 mercury mountaineer 4wd 4.6 2008 8 auto(l6) 4 13 19 r suv
```
So we now have something that is *flexible*, *reusable*, and *extensible*, and it took less code than writing out all the individual cases.
Anonymous functions
-------------------
Oftentimes we just need a quick and easy function for a one\-off application, especially when using apply/map functions. Consider the following two lines of code.
```
apply(mtcars, 2, sd)
apply(mtcars, 2, function(x) x / 2 )
```
The difference between the two is that, for the latter, our function didn't have to be a named object already available. We created a function on the fly just to serve a specific purpose. No base R function exists that does nothing but divide by two, but since it's so simple, we just created it as needed.
To further illustrate this, we’ll create a robust standardization function that uses the median and median absolute deviation rather than the mean and standard deviation.
```
# some variables have a mad = 0, and so return Inf (x/0) or NaN (0/0)
# apply(mtcars, 2, function(x) (x - median(x))/mad(x)) %>%
# head()
mtcars %>%
map_df(function(x) (x - median(x))/mad(x))
```
```
# A tibble: 32 x 11
mpg cyl disp hp drat wt qsec vs am gear carb
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 0.333 0 -0.258 -0.169 0.291 -0.919 -0.883 NaN Inf 0 1.35
2 0.333 0 -0.258 -0.169 0.291 -0.587 -0.487 NaN Inf 0 1.35
3 0.665 -0.674 -0.629 -0.389 0.220 -1.31 0.636 Inf Inf 0 -0.674
4 0.407 0 0.439 -0.169 -0.873 -0.143 1.22 Inf NaN -0.674 -0.674
5 -0.0924 0.674 1.17 0.674 -0.774 0.150 -0.487 NaN NaN -0.674 0
6 -0.203 0 0.204 -0.233 -1.33 0.176 1.77 Inf NaN -0.674 -0.674
7 -0.905 0.674 1.17 1.58 -0.689 0.319 -1.32 NaN NaN -0.674 1.35
8 0.961 -0.674 -0.353 -0.791 -0.00710 -0.176 1.62 Inf NaN 0 0
9 0.665 -0.674 -0.395 -0.363 0.319 -0.228 3.67 Inf NaN 0 0
10 0 0 -0.204 0 0.319 0.150 0.417 Inf NaN 0 1.35
# … with 22 more rows
```
Even if you don’t use [anonymous functions](https://en.wikipedia.org/wiki/Anonymous_function) (sometimes called *lambda* functions), it’s important to understand them, because you’ll often see other people’s code using them.
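As an aside, there are shorthand ways to write anonymous functions. purrr functions accept a formula style, and R 4.1+ has a built-in backslash lambda. The following sketches repeat the divide-by-two idea, assuming a recent R version and the tidyverse loaded.
```
# purrr formula shorthand: .x refers to the current element
mtcars %>%
  map_df(~ .x / 2)

# base R shorthand (R 4.1+): \(x) is identical to function(x)
apply(mtcars, 2, \(x) x / 2)
```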
While it goes beyond the scope of this document at present, I should note that RStudio has a very nice and easy to use debugger. Once you get comfortable writing functions, you can use the debugger to troubleshoot problems that arise, and test new functionality (see the ‘Debug’ menu). In addition, one can profile functions to see what parts are, for example, more memory intensive, or otherwise serve as a bottleneck (see the ‘Profile’ menu). You can use the profiler on any code, not just functions.
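Though the details are left to the RStudio documentation, the debugger builds on base R tools you can use anywhere. A minimal sketch with browser(), which pauses execution inside the function so you can poke around; the function name here is just made up for illustration.
```
my_summary_debug <- function(x) {
  browser()   # execution pauses here: inspect x, step with n, continue with c, quit with Q
  data.frame(
    mean = mean(x, na.rm = TRUE),
    sd   = sd(x, na.rm = TRUE)
  )
}

# my_summary_debug(mtcars$mpg)   # run interactively to try it out
```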
Writing Functions Exercises
---------------------------
### Exercise 1
Write a function that takes the log of the sum of two values (i.e. just two single numbers) using the log function. Just remember that within a function, you can write R code just like you normally would.
```
log_sum <- function(a, b) {
?
}
```
### Exercise 1b
What happens if the sum of the two numbers is negative? You can't take the log of a negative value; in R you'll get a NaN and a warning rather than anything useful. How might we deal with this? Try using a conditional to provide an error message using the stop function. The first part is basically identical to the function you just did. But given that result, you will need to check whether it is negative or not. The message can be whatever you want.
```
log_sum <- function(a, b) {
?
if (? < 0) {
stop('Your message here.')
} else {
?
return(your_log_sum_results) # this is an arbitrary name, change accordingly
}
}
```
### Exercise 2
Let’s write a function that will take a numeric variable and convert it to a character string of ‘positive’ vs. ‘negative’. We can use `if {}... else {}` structure, ifelse, or dplyr::if\_else\- they all would accomplish this. In this case, the input is a single vector of numbers, and the output will recode any negative value to ‘negative’ and positive values to ‘positive’ (or whatever you want). Here is an example of how we would just do it as a one\-off.
```
set.seed(123) # so you get the exact same 'random' result
x <- rnorm(10)
if_else(x < 0, "negative", "positive")
```
Now try your hand at writing a function for that.
```
pos_neg <- function(?) {
?
}
```
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/more.html |
More Programming
================
This section is kind of a grab bag of miscellaneous things related to programming. If you’ve made it this far, feel free to keep going!
Code Style
----------
A lot has been written about coding style over the decades. If there was a definitive answer, you would have heard of it by now. However, there are a couple of things you can do at the start of your programming journey that will go a long way toward making your code notably better.
### Why does your code exist?
Either use text in an R Markdown file or comment your R script. Explain *why*, not *what*, the code is doing. Think of it as leaving your future self a note (they will thank you!). Be clear, and don’t assume you’ll remember why you were doing what you did.
### Assignment
You'll see some people using `<-` and others using `=` for assignment. While there is a slight difference, if you're writing decent code it shouldn't matter. Far more programming languages use `=`, so that's one reason to prefer it. However, if you like being snobby about things, go with `<-`. Whichever you use, do so consistently[16](#fn16).
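If you are curious about the 'slight difference', it mostly shows up inside function calls, where `=` names an argument rather than assigning. A quick sketch:
```
median(x = 1:10)    # 'x' here is just the argument name; no x object is created
median(x <- 1:10)   # this *does* create x in your workspace (and is best avoided)
```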
### Code length
If your script is becoming hundreds of lines long, you probably need to compartmentalize your operations into separate scripts. For example, separate your data processing from your model scripts.
### Spacing
Don’t be stingy with spaces. As you start out, err on the side of using them. Just note there are exceptions (e.g. no space between function name and parenthesis, unless that function is something like if or else), but you’ll get used to the exceptions over time.
```
x=rnorm(10, mean=0,sd=1) # harder to read
# space between lines too!
x = rnorm(10, mean = 0, sd = 1) # easier to read
```
### Naming things
You might not think of it as such initially, but one of the more difficult challenges in programming is naming things. Even if we can come up with a name for an object or file, there are different styles we can use for the name.
Here is a brief list of things to keep in mind.
* The name should make sense to you, your future self, and others that will use the code
* Try to be concise, but see the previous
* Make liberal use of suffixes/prefixes for naming the same types of things e.g. model\_x, model\_z
* For function names, try for verbs that describe what they do (e.g. add\_two vs. two\_more or plus2\)
* Don’t name anything with ‘final’
* Don’t name something that is already an R function/object (e.g. `T`, c, data, etc.)
* Avoid distinguishing names only by number, e.g. data1 data2
Common naming styles include:
* snake\_case
* CamelCase or camelCase
* spinal\-case (e.g. for file names)
* dot.case
For objects and functions, I find snake case easier to read and less prone to issues[17](#fn17). For example, camel case can fail miserably when acronyms are involved. Dots already have specific uses (file name extensions, function methods, etc.), so they should probably be avoided unless you're using them for that specific purpose (they can also make selecting the whole name difficult depending on the context).
### Other
Use tools like the built\-in RStudio code cleanup shortcut, `Ctrl/Cmd + Shft + A`. It's not perfect, in the sense that I disagree with some of its style choices, but it will definitely do better than you will on your own starting out.
Vectorization
-------------
### Boolean indexing
Assume x is a vector of numbers. How would we create an index representing any value greater than 2?
```
x = c(-1, 2, 10, -5)
idx = x > 2
idx
```
```
[1] FALSE FALSE TRUE FALSE
```
```
x[idx]
```
```
[1] 10
```
As mentioned [previously](data_structures.html#logicals), logicals are objects with values of `TRUE` or `FALSE`, like the idx variable above. While sometimes we want to deal with the logical object as an end, it is extremely commonly used as an index in data processing. Note how we don’t have to create an explicit index object first (though often you should), as R indexing is ridiculously flexible. Here are more examples, not necessarily recommended, but just to demonstrate the flexibility of Boolean indexing.
```
x[x > 2]
x[x != 'cat']
x[ifelse(x > 2 & x !=10, TRUE, FALSE)]
x[{y = idx; y}]
x[resid(lm(y ~ x)) > 0]
```
All of these will transfer to the tidyverse filter function.
```
df %>%
filter(x > 2, z == 'a') # commas are like &
```
### Vectorized operations
Boolean indexing allows us to take vectorized approaches to dealing with data. Consider the following unfortunately coded loop, where we create a variable `y` that takes on the value **Yes** if the variable `x` is greater than 2, and **No** otherwise.
```
for (i in 1:nrow(mydf)) {
check = mydf$x[i] > 2
if (check == TRUE) {
mydf$y[i] = 'Yes'
}
else {
mydf$y[i] = 'No'
}
}
```
Compare[18](#fn18):
```
mydf$y = 'No'
mydf$y[mydf$x > 2] = 'Yes'
```
This gets us the same thing, and would be much faster than the looped approach. Boolean indexing is an example of a vectorized operation. The whole vector is considered at once, rather than each element individually. The result is that any preprocessing is done once rather than over the `n` iterations of the loop. In R, this will almost always be faster.
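For this particular recoding, the vectorized ifelse (or dplyr's `if_else` from earlier) is another one-liner, assuming the same mydf as above:
```
mydf$y = ifelse(mydf$x > 2, 'Yes', 'No')
```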
Example: Log all values in a matrix.
```
mymatrix_log = log(mymatrix)
```
This is way faster than looping over elements, rows or columns. Here we’ll let the apply function stand in for our loop, logging the elements of each column.
```
mymatrix = matrix(runif(100), 10, 10)
identical(apply(mymatrix, 2, log), log(mymatrix))
```
```
[1] TRUE
```
```
library(microbenchmark)
microbenchmark(apply(mymatrix, 2, log), log(mymatrix))
```
```
Unit: nanoseconds
expr min lq mean median uq max neval cld
apply(mymatrix, 2, log) 33961 41309.5 77630.22 64910 96955.0 289128 100 b
log(mymatrix) 918 1040.5 3473.66 1258 1704.5 189819 100 a
```
Many vectorized functions already exist in R. They are often written in C, Fortran, etc., and so are even faster. Not all programming languages lean toward vectorized operations, and some may not see much speed gain from them. In R, however, you'll want to prefer it. Even without the speed gain, it's cleaner/clearer code, which is another reason to use the approach.
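A few of the built-in vectorized functions being referred to, just as a non-exhaustive sketch:
```
x = rnorm(10)
m = matrix(rnorm(20), ncol = 4)

cumsum(x)      # running sum, no loop required
pmax(x, 0)     # elementwise maximum of each value and 0
rowSums(m)     # row sums in one call
colMeans(m)    # column means in one call
```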
#### Timings
We made our own standardizing function before; however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of a matrix, including a parallelized approach. As you'll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat)) {
x = mymat[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat)) {
x = mymat[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to detect, extract, or replace the matching string. There are actually many different flavors of regex across programming languages, most of which follow or can emulate the Perl approach. Knowing one means you pretty much know the others, with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example of a pattern we might be interested in:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well, here is an example of strings it would and wouldn't match. We're using grepl to return a logical (i.e. `TRUE` or `FALSE`) for each string, indicating whether it matches the pattern.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern = '^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical uses
Little of it reads naturally, so don't try to memorize it all. Just try to remember a couple of key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower case letter or a number)
* **?** : the preceding item is optional and will be matched at most once; the ? character also shows up in 'look ahead' and 'look behind' constructs
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality with perhaps slightly faster processing (though that's due to the underlying stringi package).
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
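To make the stringr connection concrete, here are sketches of the common detect/extract/replace operations. The file names and patterns are made up for illustration, and one of them shows the escaped dot mentioned above.
```
library(stringr)

files = c('notes.txt', 'data.csv', 'old_data.csv')

str_detect(files, '\\.csv$')       # which names end in .csv?
str_extract(files, '[a-z]+')       # first run of lowercase letters in each
str_replace_all(files, pattern = '_', replacement = '-')  # every underscore to a dash
```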
Code Style Exercises
--------------------
### Exercise 1
For the following model related output, come up with a name for each object.
```
lm(hwy ~ cyl, data = mpg) # hwy mileage predicted by number of cylinders
summary(lm(hwy ~ cyl, data = mpg)) # the summary of that
lm(hwy ~ cyl + displ + year, data = mpg) # an extension of that
```
### Exercise 2
Fix this code.
```
x=rnorm(100, 10, 2)
y=.2* x+ rnorm(100)
data = data.frame(x,y)
q = lm(y~x, data=data)
summary(q)
```
Vectorization Exercises
-----------------------
Before we do this, did you remember to fix the names in the previous exercise?
### Exercise 1
Show a non\-vectorized (e.g. a loop) and a vectorized way to add a two to the numbers 1 through 3\.
```
?
```
### Exercise 2
Of the following, which do you think is faster? Test it with the bench package.
```
x = matrix(rpois(100000, lambda = 5), ncol = 100)
colSums(x)
apply(x, 2, sum)
bench::mark(
cs = colSums(x),
app = apply(x, 2, sum),
time_unit = 'ms' # milliseconds
)
```
Regex Exercises
---------------
### Exercise 1
Using stringr and str\_replace, replace the a's in the state names with nothing (i.e. remove them).
```
library(stringr)
str_replace(state.name, pattern = ?, replacement = ?)
```
Code Style
----------
A lot has been written about coding style over the decades. If there was a definitive answer, you would have heard of it by now. However, there are a couple things you can do at the beginning of your programming approach to go a long way making your code notably better.
### Why does your code exist?
Either use text in an R Markdown file or comment your R script. Explain *why*, not *what*, the code is doing. Think of it as leaving your future self a note (they will thank you!). Be clear, and don’t assume you’ll remember why you were doing what you did.
### Assignment
You see some using `<-` or `=` for assignment. While there is a slight difference, if you’re writing decent code it shouldn’t matter. Far more programming languages use `=`, so that’s reason to prefer it. However, if you like being snobby about things, go with `<-`. Whichever you use, do so consistently[16](#fn16).
### Code length
If your script is becoming hundreds of lines long, you probably need to compartmentalize your operations into separate scripts. For example, separate your data processing from your model scripts.
### Spacing
Don’t be stingy with spaces. As you start out, err on the side of using them. Just note there are exceptions (e.g. no space between function name and parenthesis, unless that function is something like if or else), but you’ll get used to the exceptions over time.
```
x=rnorm(10, mean=0,sd=1) # harder to read
# space between lines too!
x = rnorm(10, mean = 0, sd = 1) # easier to read
```
### Naming things
You might not think of it as such initially, but one of the more difficult challenges in programming is naming things. Even if we can come up with a name for an object or file, there are different styles we can use for the name.
Here is a brief list of things to keep in mind.
* The name should make sense to you, your future self, and others that will use the code
* Try to be concise, but see the previous
* Make liberal use of suffixes/prefixes for naming the same types of things e.g. model\_x, model\_z
* For function names, try for verbs that describe what they do (e.g. add\_two vs. two\_more or plus2\)
* Don’t name anything with ‘final’
* Don’t name something that is already an R function/object (e.g. `T`, c, data, etc.)
* Avoid distinguishing names only by number, e.g. data1 data2
Common naming styles include:
* snake\_case
* CamelCase or camelCase
* spinal\-case (e.g. for file names)
* dot.case
For objects and functions, I find snake case easier to read and less prone to issues[17](#fn17). For example, camel case can fail miserably when acronyms are involved. Dots already have specific uses (file name extensions, function methods, etc.), so probably should be avoided unless you’re using them for that specific purpose (they can also make selecting the whole name difficult depending on the context).
### Other
Use tools like the built\-in RStudio code cleanup shortcut like `Ctrl/Cmd + Shft + A`. It’s not perfect, in the sense I disagree with some of its style choice, but it will definitely be better than you will do on your own starting out.
### Why does your code exist?
Either use text in an R Markdown file or comment your R script. Explain *why*, not *what*, the code is doing. Think of it as leaving your future self a note (they will thank you!). Be clear, and don’t assume you’ll remember why you were doing what you did.
### Assignment
You see some using `<-` or `=` for assignment. While there is a slight difference, if you’re writing decent code it shouldn’t matter. Far more programming languages use `=`, so that’s reason to prefer it. However, if you like being snobby about things, go with `<-`. Whichever you use, do so consistently[16](#fn16).
### Code length
If your script is becoming hundreds of lines long, you probably need to compartmentalize your operations into separate scripts. For example, separate your data processing from your model scripts.
### Spacing
Don’t be stingy with spaces. As you start out, err on the side of using them. Just note there are exceptions (e.g. no space between function name and parenthesis, unless that function is something like if or else), but you’ll get used to the exceptions over time.
```
x=rnorm(10, mean=0,sd=1) # harder to read
# space between lines too!
x = rnorm(10, mean = 0, sd = 1) # easier to read
```
### Naming things
You might not think of it as such initially, but one of the more difficult challenges in programming is naming things. Even if we can come up with a name for an object or file, there are different styles we can use for the name.
Here is a brief list of things to keep in mind.
* The name should make sense to you, your future self, and others that will use the code
* Try to be concise, but see the previous
* Make liberal use of suffixes/prefixes for naming the same types of things e.g. model\_x, model\_z
* For function names, try for verbs that describe what they do (e.g. add\_two vs. two\_more or plus2\)
* Don’t name anything with ‘final’
* Don’t name something that is already an R function/object (e.g. `T`, c, data, etc.)
* Avoid distinguishing names only by number, e.g. data1 data2
Common naming styles include:
* snake\_case
* CamelCase or camelCase
* spinal\-case (e.g. for file names)
* dot.case
For objects and functions, I find snake case easier to read and less prone to issues[17](#fn17). For example, camel case can fail miserably when acronyms are involved. Dots already have specific uses (file name extensions, function methods, etc.), so probably should be avoided unless you’re using them for that specific purpose (they can also make selecting the whole name difficult depending on the context).
### Other
Use tools like the built\-in RStudio code cleanup shortcut like `Ctrl/Cmd + Shft + A`. It’s not perfect, in the sense I disagree with some of its style choice, but it will definitely be better than you will do on your own starting out.
Vectorization
-------------
### Boolean indexing
Assume x is a vector of numbers. How would we create an index representing any value greater than 2?
```
x = c(-1, 2, 10, -5)
idx = x > 2
idx
```
```
[1] FALSE FALSE TRUE FALSE
```
```
x[idx]
```
```
[1] 10
```
As mentioned [previously](data_structures.html#logicals), logicals are objects with values of `TRUE` or `FALSE`, like the idx variable above. While sometimes we want to deal with the logical object as an end, it is extremely commonly used as an index in data processing. Note how we don’t have to create an explicit index object first (though often you should), as R indexing is ridiculously flexible. Here are more examples, not necessarily recommended, but just to demonstrate the flexibility of Boolean indexing.
```
x[x > 2]
x[x != 'cat']
x[ifelse(x > 2 & x !=10, TRUE, FALSE)]
x[{y = idx; y}]
x[resid(lm(y ~ x)) > 0]
```
All of these will transfer to the tidyverse filter function.
```
df %>%
filter(x > 2, z == 'a') # commas are like &
```
### Vectorized operations
Boolean indexing allows us to take vectorized approaches to dealing with data. Consider the following unfortunately coded loop, where we create a variable `y`, which takes on the value of **Yes** if the variable `x` is greater than 2, and **No** if otherwise.
```
for (i in 1:nrow(mydf)) {
check = mydf$x[i] > 2
if (check == TRUE) {
mydf$y[i] = 'Yes'
}
else {
mydf$y[i] = 'No'
}
}
```
Compare[18](#fn18):
```
mydf$y = 'No'
mydf$y[mydf$x > 2] = 'Yes'
```
This gets us the same thing, and would be much faster than the looped approach. Boolean indexing is an example of a vectorized operation. The whole vector is considered, rather than each element individually. The result is that any preprocessing is done once rather than the `n` iterations of the loop. In R, this will always faster.
Example: Log all values in a matrix.
```
mymatrix_log = log(mymatrix)
```
This is way faster than looping over elements, rows or columns. Here we’ll let the apply function stand in for our loop, logging the elements of each column.
```
mymatrix = matrix(runif(100), 10, 10)
identical(apply(mymatrix, 2, log), log(mymatrix))
```
```
[1] TRUE
```
```
library(microbenchmark)
microbenchmark(apply(mymatrix, 2, log), log(mymatrix))
```
```
Unit: nanoseconds
expr min lq mean median uq max neval cld
apply(mymatrix, 2, log) 33961 41309.5 77630.22 64910 96955.0 289128 100 b
log(mymatrix) 918 1040.5 3473.66 1258 1704.5 189819 100 a
```
Many vectorized functions already exist in R. They are often written in C, Fortran etc., and so even faster. Not all programming languages lean toward vectorized operations, and may not see much speed gain from it. In R however, you’ll want to prefer it. Even without the speed gain, it’s cleaner/clearer code, another reason to use the approach.
#### Timings
We made our own function before, however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of the matrix, even using a parallelized approach. As you’ll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
### Boolean indexing
Assume x is a vector of numbers. How would we create an index representing any value greater than 2?
```
x = c(-1, 2, 10, -5)
idx = x > 2
idx
```
```
[1] FALSE FALSE TRUE FALSE
```
```
x[idx]
```
```
[1] 10
```
As mentioned [previously](data_structures.html#logicals), logicals are objects with values of `TRUE` or `FALSE`, like the idx variable above. While sometimes we want to deal with the logical object as an end, it is extremely commonly used as an index in data processing. Note how we don’t have to create an explicit index object first (though often you should), as R indexing is ridiculously flexible. Here are more examples, not necessarily recommended, but just to demonstrate the flexibility of Boolean indexing.
```
x[x > 2]
x[x != 'cat']
x[ifelse(x > 2 & x !=10, TRUE, FALSE)]
x[{y = idx; y}]
x[resid(lm(y ~ x)) > 0]
```
All of these will transfer to the tidyverse filter function.
```
df %>%
filter(x > 2, z == 'a') # commas are like &
```
### Vectorized operations
Boolean indexing allows us to take vectorized approaches to dealing with data. Consider the following unfortunately coded loop, where we create a variable `y`, which takes on the value of **Yes** if the variable `x` is greater than 2, and **No** if otherwise.
```
for (i in 1:nrow(mydf)) {
check = mydf$x[i] > 2
if (check == TRUE) {
mydf$y[i] = 'Yes'
}
else {
mydf$y[i] = 'No'
}
}
```
Compare[18](#fn18):
```
mydf$y = 'No'
mydf$y[mydf$x > 2] = 'Yes'
```
This gets us the same thing, and would be much faster than the looped approach. Boolean indexing is an example of a vectorized operation. The whole vector is considered, rather than each element individually. The result is that any preprocessing is done once rather than the `n` iterations of the loop. In R, this will always faster.
Example: Log all values in a matrix.
```
mymatrix_log = log(mymatrix)
```
This is way faster than looping over elements, rows or columns. Here we’ll let the apply function stand in for our loop, logging the elements of each column.
```
mymatrix = matrix(runif(100), 10, 10)
identical(apply(mymatrix, 2, log), log(mymatrix))
```
```
[1] TRUE
```
```
library(microbenchmark)
microbenchmark(apply(mymatrix, 2, log), log(mymatrix))
```
```
Unit: nanoseconds
expr min lq mean median uq max neval cld
apply(mymatrix, 2, log) 33961 41309.5 77630.22 64910 96955.0 289128 100 b
log(mymatrix) 918 1040.5 3473.66 1258 1704.5 189819 100 a
```
Many vectorized functions already exist in R. They are often written in C, Fortran etc., and so even faster. Not all programming languages lean toward vectorized operations, and may not see much speed gain from it. In R however, you’ll want to prefer it. Even without the speed gain, it’s cleaner/clearer code, another reason to use the approach.
#### Timings
We made our own function before, however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of the matrix, even using a parallelized approach. As you’ll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
#### Timings
We made our own function before, however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of the matrix, even using a parallelized approach. As you’ll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to merely detect, extract, or replace the matching string. There are actually many different flavors of regex for different programming languages, which are all flavors that originate with the Perl approach, or can enable the Perl approach to be used. However, knowing one means you pretty much know the others with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example of a pattern we might be interested in:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match. We’re using grepl to return a logical (i.e. `TRUE` or `FALSE`) if any of the strings match the pattern in some way.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern = '^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical uses
None of it makes intuitive sense, so don’t try to force it. Just try to remember a couple of key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower case letter or a number)
* **?** : preceding item is optional, and will be matched at most once; it also appears in ‘look ahead’ (`(?=...)`) and ‘look behind’ (`(?<=...)`) constructs
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in an R string you need \\\\., i.e. a double backslash, for the escape.
In addition, R has certain predefined character classes that can be used:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality, with perhaps slightly faster processing (due to the underlying stringi package).
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern = 'a')
grepl(c('apple', 'pear', 'banana'), pattern = '^a')
grepl(c('apple', 'pear', 'banana'), pattern = '^a|a$')
```
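To make the stringr comparison concrete, here is a minimal sketch of the equivalent functions, assuming the package is installed; the fruit vector is just for illustration.
```
library(stringr)

fruit = c('apple', 'pear', 'banana')

str_detect(fruit, 'a')            # same logical result as grepl(pattern = 'a', fruit)
str_extract(fruit, '^.')          # extract the first character of each string
str_replace_all(fruit, 'a', '_')  # replace every 'a'
```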
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
Code Style Exercises
--------------------
### Exercise 1
For the following model\-related output, come up with a name for each object.
```
lm(hwy ~ cyl, data = mpg) # hwy mileage predicted by number of cylinders
summary(lm(hwy ~ cyl, data = mpg)) # the summary of that
lm(hwy ~ cyl + displ + year, data = mpg) # an extension of that
```
### Exercise 2
Fix this code.
```
x=rnorm(100, 10, 2)
y=.2* x+ rnorm(100)
data = data.frame(x,y)
q = lm(y~x, data=data)
summary(q)
```
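For reference, one possible cleanup, applying the spacing and naming suggestions from earlier; the object names are just one reasonable choice.
```
x = rnorm(100, mean = 10, sd = 2)
y = 0.2 * x + rnorm(100)

sim_data = data.frame(x, y)

model_xy = lm(y ~ x, data = sim_data)
summary(model_xy)
```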
Vectorization Exercises
-----------------------
Before we do this, did you remember to fix the names in the previous exercise?
### Exercise 1
Show a non\-vectorized (e.g. a loop) and a vectorized way to add two to the numbers 1 through 3\.
```
?
```
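One possible answer, sketched for reference; the loop version is deliberately verbose.
```
# non-vectorized: loop over each element
nums = 1:3
out  = numeric(length(nums))

for (i in seq_along(nums)) {
  out[i] = nums[i] + 2
}

out

# vectorized: the scalar is recycled across the whole vector
1:3 + 2
```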
### Exercise 2
Of the following, which do you think is faster? Test it with the bench package.
```
x = matrix(rpois(100000, lambda = 5), ncol = 100)
colSums(x)
apply(x, 2, sum)
bench::mark(
cs = colSums(x),
app = apply(x, 2, sum),
  time_unit = 'ms' # milliseconds
)
```
Regex Exercises
---------------
### Exercise 1
Using stringr and str\_replace, replace all of the a’s in the state names with nothing.
```
library(stringr)
str_replace(state.name, pattern = ?, replacement = ?)
```
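A possible answer for reference. Note that str\_replace only replaces the first ‘a’ in each name; str\_replace\_all would remove every ‘a’.
```
library(stringr)

str_replace(state.name, pattern = 'a', replacement = '')

# to remove every 'a' instead:
str_replace_all(state.name, pattern = 'a', replacement = '')
```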
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
### Boolean indexing
Assume x is a vector of numbers. How would we create an index representing any value greater than 2?
```
x = c(-1, 2, 10, -5)
idx = x > 2
idx
```
```
[1] FALSE FALSE TRUE FALSE
```
```
x[idx]
```
```
[1] 10
```
As mentioned [previously](data_structures.html#logicals), logicals are objects with values of `TRUE` or `FALSE`, like the idx variable above. While sometimes we want to deal with the logical object as an end, it is extremely commonly used as an index in data processing. Note how we don’t have to create an explicit index object first (though often you should), as R indexing is ridiculously flexible. Here are more examples, not necessarily recommended, but just to demonstrate the flexibility of Boolean indexing.
```
x[x > 2]
x[x != 'cat']
x[ifelse(x > 2 & x !=10, TRUE, FALSE)]
x[{y = idx; y}]
x[resid(lm(y ~ x)) > 0]
```
All of these will transfer to the tidyverse filter function.
```
df %>%
filter(x > 2, z == 'a') # commas are like &
```
### Vectorized operations
Boolean indexing allows us to take vectorized approaches to dealing with data. Consider the following unfortunately coded loop, where we create a variable `y`, which takes on the value of **Yes** if the variable `x` is greater than 2, and **No** if otherwise.
```
for (i in 1:nrow(mydf)) {
check = mydf$x[i] > 2
if (check == TRUE) {
mydf$y[i] = 'Yes'
}
else {
mydf$y[i] = 'No'
}
}
```
Compare[18](#fn18):
```
mydf$y = 'No'
mydf$y[mydf$x > 2] = 'Yes'
```
This gets us the same thing, and would be much faster than the looped approach. Boolean indexing is an example of a vectorized operation. The whole vector is considered, rather than each element individually. The result is that any preprocessing is done once rather than the `n` iterations of the loop. In R, this will always faster.
Example: Log all values in a matrix.
```
mymatrix_log = log(mymatrix)
```
This is way faster than looping over elements, rows or columns. Here we’ll let the apply function stand in for our loop, logging the elements of each column.
```
mymatrix = matrix(runif(100), 10, 10)
identical(apply(mymatrix, 2, log), log(mymatrix))
```
```
[1] TRUE
```
```
library(microbenchmark)
microbenchmark(apply(mymatrix, 2, log), log(mymatrix))
```
```
Unit: nanoseconds
expr min lq mean median uq max neval cld
apply(mymatrix, 2, log) 33961 41309.5 77630.22 64910 96955.0 289128 100 b
log(mymatrix) 918 1040.5 3473.66 1258 1704.5 189819 100 a
```
Many vectorized functions already exist in R. They are often written in C, Fortran etc., and so even faster. Not all programming languages lean toward vectorized operations, and may not see much speed gain from it. In R however, you’ll want to prefer it. Even without the speed gain, it’s cleaner/clearer code, another reason to use the approach.
#### Timings
We made our own function before, however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of the matrix, even using a parallelized approach. As you’ll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
#### Timings
We made our own function before, however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of the matrix, even using a parallelized approach. As you’ll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol=1000)
stdize <- function(x) {
(x-mean(x)) / sd(x)
}
doubleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
for (j in 1:length(x)) {
x[j] = (x[j] - mean(x)) / sd(x)
}
}
}
singleloop = function() {
for (i in 1:ncol(mymat_asdf)) {
x = mymat_asdf[, i]
x = (x - mean(x)) / sd(x)
}
}
library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))
test = microbenchmark::microbenchmark(
doubleloop = doubleloop(),
singleloop = singleloop(),
apply = apply(mymat, 2, stdize),
parApply = parApply(cl, mymat, 2, stdize),
vectorized = scale(mymat),
times = 25
)
stopCluster(cl)
test
```
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to merely detect, extract, or replace the matching string. There are actually many different flavors of regex for different programming languages, which are all flavors that originate with the Perl approach, or can enable the Perl approach to be used. However, knowing one means you pretty much know the others with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example of a pattern we might be interested in:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match. We’re using grepl to return a logical (i.e. `TRUE` or `FALSE`) if any of the strings match the pattern in some way.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern = '^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical uses
None of it makes sense, so don’t attempt to do so. Just try to remember a couple key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower case letter or a number)
* **?** : preceding item is optional, and will be matched at most once. Typically used for ‘look ahead’ and ‘look behind’
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality with perhaps a slightly faster processing (though that’s due to the underlying stringi package).
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
### Typical uses
None of it makes sense, so don’t attempt to do so. Just try to remember a couple key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower case letter or a number)
* **?** : preceding item is optional, and will be matched at most once. Typically used for ‘look ahead’ and ‘look behind’
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality with perhaps a slightly faster processing (though that’s due to the underlying stringi package).
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
Code Style Exercises
--------------------
### Exercise 1
For the following model related output, come up with a name for each object.
```
lm(hwy ~ cyl, data = mpg) # hwy mileage predicted by number of cylinders
summary(lm(hwy ~ cyl, data = mpg)) # the summary of that
lm(hwy ~ cyl + displ + year, data = mpg) # an extension of that
```
### Exercise 2
Fix this code.
```
x=rnorm(100, 10, 2)
y=.2* x+ rnorm(100)
data = data.frame(x,y)
q = lm(y~x, data=data)
summary(q)
```
### Exercise 1
For the following model related output, come up with a name for each object.
```
lm(hwy ~ cyl, data = mpg) # hwy mileage predicted by number of cylinders
summary(lm(hwy ~ cyl, data = mpg)) # the summary of that
lm(hwy ~ cyl + displ + year, data = mpg) # an extension of that
```
### Exercise 2
Fix this code.
```
x=rnorm(100, 10, 2)
y=.2* x+ rnorm(100)
data = data.frame(x,y)
q = lm(y~x, data=data)
summary(q)
```
Vectorization Exercises
-----------------------
Before we do this, did you remember to fix the names in the previous exercise?
### Exercise 1
Show a non\-vectorized (e.g. a loop) and a vectorized way to add a two to the numbers 1 through 3\.
```
?
```
### Exercise 2
Of the following, which do you think is faster? Test it with the bench package.
```
x = matrix(rpois(100000, lambda = 5), ncol = 100)
colSums(x)
apply(x, 2, sum)
bench::mark(
cs = colSums(x),
app = apply(x, 2, sum),
time_unit = 'ms' # microseconds
)
```
### Exercise 1
Show a non\-vectorized (e.g. a loop) and a vectorized way to add a two to the numbers 1 through 3\.
```
?
```
### Exercise 2
Of the following, which do you think is faster? Test it with the bench package.
```
x = matrix(rpois(100000, lambda = 5), ncol = 100)
colSums(x)
apply(x, 2, sum)
bench::mark(
cs = colSums(x),
app = apply(x, 2, sum),
time_unit = 'ms' # microseconds
)
```
Regex Exercises
---------------
### Exercise 1
Using stringr and str\_replace, replace all the states a’s with nothing.
```
library(stringr)
str_replace(state.name, pattern = ?, replacement = ?)
```
### Exercise 1
Using stringr and str\_replace, replace all the states a’s with nothing.
```
library(stringr)
str_replace(state.name, pattern = ?, replacement = ?)
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/more.html |
More Programming
================
This section is kind of a grab bag of miscellaneous things related to programming. If you’ve made it this far, feel free to keep going!
Code Style
----------
A lot has been written about coding style over the decades. If there were a definitive answer, you would have heard of it by now. However, there are a couple of things you can do at the beginning of your programming journey that will go a long way toward making your code notably better.
### Why does your code exist?
Either use text in an R Markdown file or comment your R script. Explain *why*, not *what*, the code is doing. Think of it as leaving your future self a note (they will thank you!). Be clear, and don’t assume you’ll remember why you were doing what you did.
### Assignment
You'll see some people using `<-` and others using `=` for assignment. While there is a slight difference, if you're writing decent code it shouldn't matter. Far more programming languages use `=`, so that's a reason to prefer it. However, if you like being snobby about things, go with `<-`. Whichever you use, do so consistently[16](#fn16).
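For example, either of the following creates the same object; the main thing is to pick one and stick with it.
```
x <- 5   # arrow assignment
x = 5    # equals assignment; the result is identical here
```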
### Code length
If your script is becoming hundreds of lines long, you probably need to compartmentalize your operations into separate scripts. For example, separate your data processing from your model scripts.
### Spacing
Don’t be stingy with spaces. As you start out, err on the side of using them. Just note there are exceptions (e.g. no space between function name and parenthesis, unless that function is something like if or else), but you’ll get used to the exceptions over time.
```
x=rnorm(10, mean=0,sd=1) # harder to read
# space between lines too!
x = rnorm(10, mean = 0, sd = 1) # easier to read
```
### Naming things
You might not think of it as such initially, but one of the more difficult challenges in programming is naming things. Even if we can come up with a name for an object or file, there are different styles we can use for the name.
Here is a brief list of things to keep in mind.
* The name should make sense to you, your future self, and others that will use the code
* Try to be concise, but see the previous
* Make liberal use of suffixes/prefixes for naming the same types of things e.g. model\_x, model\_z
* For function names, try for verbs that describe what they do (e.g. add\_two vs. two\_more or plus2\)
* Don’t name anything with ‘final’
* Don’t name something that is already an R function/object (e.g. `T`, c, data, etc.)
* Avoid distinguishing names only by number, e.g. data1 data2
Common naming styles include:
* snake\_case
* CamelCase or camelCase
* spinal\-case (e.g. for file names)
* dot.case
For objects and functions, I find snake case easier to read and less prone to issues[17](#fn17). For example, camel case can fail miserably when acronyms are involved. Dots already have specific uses (file name extensions, function methods, etc.), so probably should be avoided unless you’re using them for that specific purpose (they can also make selecting the whole name difficult depending on the context).
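As a small sketch of the styles side by side (using the built-in mtcars data purely for illustration):
```
model_results_a = lm(mpg ~ wt, data = mtcars)   # snake_case
modelResultsA   = lm(mpg ~ wt, data = mtcars)   # camelCase
model.results.a = lm(mpg ~ wt, data = mtcars)   # dot.case: valid, but dots are best saved for other uses
# model-results-a is not a valid object name; spinal-case is for file names, e.g. 'model-results-a.R'
```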
### Other
Use tools like RStudio's built\-in code cleanup shortcut, `Ctrl/Cmd + Shift + A`. It's not perfect, in the sense that I disagree with some of its style choices, but it will definitely be better than what you'll do on your own starting out.
Vectorization
-------------
### Boolean indexing
Assume x is a vector of numbers. How would we create an index representing any value greater than 2?
```
x = c(-1, 2, 10, -5)
idx = x > 2
idx
```
```
[1] FALSE FALSE TRUE FALSE
```
```
x[idx]
```
```
[1] 10
```
As mentioned [previously](data_structures.html#logicals), logicals are objects with values of `TRUE` or `FALSE`, like the idx variable above. While sometimes we want the logical object as an end in itself, it is very commonly used as an index in data processing. Note how we don't have to create an explicit index object first (though often you should), as R indexing is ridiculously flexible. Here are more examples, not necessarily recommended, but just to demonstrate the flexibility of Boolean indexing.
```
x[x > 2]
x[x != 'cat']
x[ifelse(x > 2 & x !=10, TRUE, FALSE)]
x[{y = idx; y}]
x[resid(lm(y ~ x)) > 0]
```
All of these will transfer to the tidyverse filter function.
```
df %>%
filter(x > 2, z == 'a') # commas are like &
```
### Vectorized operations
Boolean indexing allows us to take vectorized approaches to dealing with data. Consider the following unfortunately coded loop, where we create a variable `y` that takes on the value of **Yes** if the variable `x` is greater than 2, and **No** otherwise.
```
for (i in 1:nrow(mydf)) {
check = mydf$x[i] > 2
if (check == TRUE) {
mydf$y[i] = 'Yes'
}
else {
mydf$y[i] = 'No'
}
}
```
Compare[18](#fn18):
```
mydf$y = 'No'
mydf$y[mydf$x > 2] = 'Yes'
```
This gets us the same thing, and would be much faster than the looped approach. Boolean indexing is an example of a vectorized operation. The whole vector is considered at once, rather than each element individually. The result is that any preprocessing is done once rather than over the `n` iterations of the loop. In R, this will always be faster.
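Another common vectorized route for this kind of recode is ifelse, shown here as a sketch assuming the same mydf as above.
```
mydf$y = ifelse(mydf$x > 2, 'Yes', 'No')
```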
Example: Log all values in a matrix.
```
mymatrix_log = log(mymatrix)
```
This is way faster than looping over elements, rows or columns. Here we’ll let the apply function stand in for our loop, logging the elements of each column.
```
mymatrix = matrix(runif(100), 10, 10)
identical(apply(mymatrix, 2, log), log(mymatrix))
```
```
[1] TRUE
```
```
library(microbenchmark)
microbenchmark(apply(mymatrix, 2, log), log(mymatrix))
```
```
Unit: nanoseconds
expr min lq mean median uq max neval cld
apply(mymatrix, 2, log) 33961 41309.5 77630.22 64910 96955.0 289128 100 b
log(mymatrix) 918 1040.5 3473.66 1258 1704.5 189819 100 a
```
Many vectorized functions already exist in R. They are often written in C, Fortran, etc., and so are even faster. Not all programming languages lean toward vectorized operations, and some may not see much speed gain from them. In R, however, you'll want to prefer vectorization. Even without the speed gain, it makes for cleaner/clearer code, which is another reason to use the approach.
#### Timings
We made our own function before; however, there is a scale function in base R that uses a more vectorized approach under the hood to standardize variables. The following demonstrates various approaches to standardizing the columns of a matrix, including a parallelized approach. As you'll see, the base R function requires very little code and beats the others.
```
mymat = matrix(rnorm(100000), ncol = 1000)

stdize <- function(x) {
  (x - mean(x)) / sd(x)
}

# deliberately naive: loop over columns and elements, recomputing mean/sd each time
doubleloop = function() {
  for (i in 1:ncol(mymat)) {
    x = mymat[, i]
    for (j in 1:length(x)) {
      x[j] = (x[j] - mean(x)) / sd(x)
    }
  }
}

# loop over columns, but standardize each column in a vectorized way
singleloop = function() {
  for (i in 1:ncol(mymat)) {
    x = mymat[, i]
    x = (x - mean(x)) / sd(x)
  }
}

library(parallel)
cl = makeCluster(8)
clusterExport(cl, c('stdize', 'mymat'))

test = microbenchmark::microbenchmark(
  doubleloop = doubleloop(),
  singleloop = singleloop(),
  apply      = apply(mymat, 2, stdize),
  parApply   = parApply(cl, mymat, 2, stdize),
  vectorized = scale(mymat),
  times      = 25
)

stopCluster(cl)
test
```
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to detect, extract, or replace the matching string. There are actually many different flavors of regex across programming languages, most of which originate with the Perl approach or allow the Perl approach to be used. Knowing one means you pretty much know the others, with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example of a pattern we might be interested in:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match. We’re using grepl to return a logical (i.e. `TRUE` or `FALSE`) if any of the strings match the pattern in some way.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern = '^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical uses
Little of it makes intuitive sense, so don't try to force it to. Just try to remember a couple of key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower case letter or a number)
* **?** : preceding item is optional, and will be matched at most once. Typically used for ‘look ahead’ and ‘look behind’
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality with perhaps a slightly faster processing (though that’s due to the underlying stringi package).
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
Code Style Exercises
--------------------
### Exercise 1
For the following model related output, come up with a name for each object.
```
lm(hwy ~ cyl, data = mpg) # hwy mileage predicted by number of cylinders
summary(lm(hwy ~ cyl, data = mpg)) # the summary of that
lm(hwy ~ cyl + displ + year, data = mpg) # an extension of that
```
### Exercise 2
Fix this code.
```
x=rnorm(100, 10, 2)
y=.2* x+ rnorm(100)
data = data.frame(x,y)
q = lm(y~x, data=data)
summary(q)
```
Vectorization Exercises
-----------------------
Before we do this, did you remember to fix the names in the previous exercise?
### Exercise 1
Show a non\-vectorized way (e.g. a loop) and a vectorized way to add two to the numbers 1 through 3\.
```
?
```
### Exercise 2
Of the following, which do you think is faster? Test it with the bench package.
```
x = matrix(rpois(100000, lambda = 5), ncol = 100)
colSums(x)
apply(x, 2, sum)
bench::mark(
cs = colSums(x),
app = apply(x, 2, sum),
time_unit = 'ms' # milliseconds
)
```
Regex Exercises
---------------
### Exercise 1
Using stringr and str\_replace, replace all of the states' a's with nothing.
```
library(stringr)
str_replace(state.name, pattern = ?, replacement = ?)
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/models.html |
Model Exploration
=================
The following section shows how to get started with modeling in R generally, with a focus on concepts, tools, and syntax, rather than trying to understand the specifics of a given model. We first dive into model exploration, getting a sense of the basic mechanics behind our modeling tools and contemplating standard results. We'll then shift our attention to understanding the strengths and limitations of our models, before moving from classical methods to explore machine learning techniques. The goal of these chapters is to provide an overview of concepts and ways to think about modeling.
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
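For instance, a principal components analysis is a classic dimension reduction tool. A minimal sketch, using the built\-in mtcars data purely for illustration, might look like the following.
```
pca = prcomp(mtcars, scale. = TRUE)  # reduce the columns of mtcars to a few components
summary(pca)                         # proportion of variance captured by each component
```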
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable |
|  | Endogenous |
|  | Response |
|  | Outcome |
|  | Output |
|  | Y |
|  | Regressand |
|  | Left hand side (LHS) |
| Predictor | Independent variable |
|  | Exogenous |
|  | Explanatory Variable |
|  | Covariate |
|  | Input |
|  | X |
|  | Regressor |
|  | Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don't have perfect prediction, and the error represents the difference between what we predict from the relationships of the predictors to the target and what we actually observe.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, followed by a tilde (`~`) and then the predictor variables. Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
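For a concrete, runnable version, here is a quick sketch using the built\-in mtcars data (just an illustration of the syntax).
```
fit = lm(mpg ~ wt + hp, data = mtcars)  # mileage predicted by weight and horsepower
summary(fit)                            # coefficients, standard errors, fit statistics
```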
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
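As a sketch of that flexibility with real data (again using mtcars purely for illustration):
```
fit_poly = lm(mpg ~ poly(wt, 2), data = mtcars)  # quadratic effect of weight
fit_int  = lm(mpg ~ wt * hp,     data = mtcars)  # wt, hp, and their interaction
```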
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters, \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
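For comparison, a vectorized version of the same loss can be written in one line using matrix multiplication; this is just a sketch of the equivalent computation, not something we will use later.

```
# the same sum of squared errors, without a loop
ls_loss_vec = function(X, y, beta) {
  sum((y - X %*% beta)^2)
}
```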
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that our third guess reduces the loss quite a bit relative to our first guess. This makes sense, because an intercept of 4 and a coefficient of .25 for `x` are much closer to the true values of 5 and .5.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to true values.
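Before looking at the exact answer, a crude grid search over candidate values is one (inefficient but instructive) way to see how much better we could do; this sketch just reuses the `ls_loss` function and data from above.

```
# a rough grid search over candidate intercepts and slopes (illustration only)
grid = expand.grid(
  intercept = seq(4, 6, by = .1),
  slope     = seq(0, 1, by = .05)
)

grid$loss = apply(grid, 1, function(b) ls_loss(X, y, beta = b))

grid[which.min(grid$loss), ]  # should land near the estimates shown next
```

The exact least squares solution, which lm computes directly, follows.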
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
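A tiny sketch of that idea: compute the log likelihood of a simulated sample across a range of candidate means, and see which candidate does best. The sample and the candidate grid are just made up for illustration.

```
# log likelihood of a sample across candidate values for the mean (true mean is 5)
set.seed(1234)
obs = rnorm(50, mean = 5, sd = 1)
candidate_means = seq(3, 7, by = .25)

log_lik = sapply(
  candidate_means,
  function(m) sum(dnorm(obs, mean = m, sd = 1, log = TRUE))
)

candidate_means[which.max(log_lik)]  # the winning candidate is close to 5
```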
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\\cdot x\_1 \+ ... \+ b\_p\\cdot x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
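As a small sketch of this generative view, we could reuse the simulated `X` from before and draw a target in exactly this way; the `beta` and `sigma` values here are just assumed for illustration.

```
# generate a target from the assumed model (illustrative values)
beta  = c(5, .5)                                # 'true' intercept and slope
sigma = 1                                       # 'true' residual standard deviation
mu    = X %*% beta                              # the mean for each observation
y_sim = rnorm(nrow(X), mean = mu, sd = sigma)   # y is normal around that mean
```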
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following table runs from the more specific to the more general, first for models and then for estimation procedures.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may speed up the time it takes to train the model, or smooth out the variability in the process.
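To make this concrete, here is a minimal sketch using base R’s general-purpose optimizer, optim, with the `ls_loss` function and simulated data from earlier; the starting values are arbitrary.

```
# let optim() search for the coefficients that minimize ls_loss
# (for max_like, add control = list(fnscale = -1) to maximize instead)
est = optim(
  par = c(0, 0),                            # arbitrary starting guesses
  fn  = function(b) ls_loss(X, y, beta = b)
)

est$par          # compare to coef(model) from before
est$convergence  # 0 indicates successful convergence
```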
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country level data based on surveys taken in various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and the data. Here we specify our target as `happiness_score`, with democratic quality, generosity, and (logged) GDP per capita as predictors.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also tend to skip out on other conveniences, such as standard methods like predict and coef, making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
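If the speed boost mentioned earlier really is the concern, a quick comparison is easy enough; this sketch assumes the microbenchmark package is installed, and exact timings will vary by machine.

```
# rough timing comparison of the formula interface vs. the matrix interface
microbenchmark::microbenchmark(
  lm     = lm(
    happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
    data = happy
  ),
  lm.fit = lm.fit(X, y),
  times  = 100
)
```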
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly a 0\.2 unit increase in happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
One thing we must also have in order to understand our results is to get a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., all would result in a slightly different result. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
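One handy way to report both together is to bind the coefficients and their intervals into a single small table; a quick base R sketch follows.

```
# coefficients alongside their confidence intervals
data.frame(
  estimate = coef(happy_model_base),
  confint(happy_model_base),
  check.names = FALSE
)
```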
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
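A quick sketch of centering with the happiness model shows the idea: the slopes are unchanged, while the intercept now reflects expected happiness when the covariates are at their means. The `_c` variable names are just made up here.

```
# center the predictors, refit, and compare to the original coefficients
happy_centered = happy %>%
  mutate(
    democratic_quality_c = democratic_quality - mean(democratic_quality, na.rm = TRUE),
    generosity_c         = generosity         - mean(generosity, na.rm = TRUE),
    log_gdp_per_capita_c = log_gdp_per_capita - mean(log_gdp_per_capita, na.rm = TRUE)
  )

coef(lm(
  happiness_score ~ democratic_quality_c + generosity_c + log_gdp_per_capita_c,
  data = happy_centered
))  # same slopes as before; only the intercept changes
```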
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
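A small sketch of both transformations on an arbitrary numeric vector; the `min_max` helper is made up for illustration and not from any particular package.

```
# standardizing vs. min-max normalization
min_max = function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

x = rnorm(10, mean = 50, sd = 10)

round(scale(x)[, 1], 2)  # mean 0, standard deviation 1
round(min_max(x), 2)     # rescaled to the [0, 1] range
```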
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted before analysis can be conducted with them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that explicitly coding the data like this is rarely required for most modeling situations, but it can sometimes be useful to do so anyway. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, each coefficient represents the difference in means on the target variable between the reference group and the group in question. For example, the U.S. is 0\.34 points lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Note: the weights sum to zero, but are otherwise arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may tend toward the creation of a single variable that simply notes whether any of those collection of variables was present or not.
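As a rough sketch of the principal components idea, here is how one might collapse a few related items from the happiness data into a single component score; the choice of items is arbitrary and just for illustration.

```
# first principal component as a weighted composite of a few related items
items = happy %>%
  select(social_support, positive_affect, negative_affect) %>%
  drop_na()

pc = prcomp(items, scale. = TRUE)

summary(pc)      # variance accounted for by each component
head(pc$x[, 1])  # scores on the first component
```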
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically produce overly optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would likely point out the shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing results where a nice expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing it to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effect might be worthy of attention, however small, if it could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or a similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
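If you do want to attempt such a decomposition, a minimal sketch with the relaimpo package might look like the following; this assumes the package is installed, and its documentation covers the available metrics and the bootstrapping functionality mentioned above.

```
# decompose R^2 into relative contributions from each predictor (sketch)
# install.packages('relaimpo')  # if needed
relaimpo::calc.relimp(happy_model_base)
```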
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn that into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract quantities that are not stored in the model object itself. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
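For example, a light cleanup sketch might look like the following before passing the result to whatever table function you prefer.

```
# tidy output with confidence intervals, rounded for presentation
tidy(happy_model_base, conf.int = TRUE) %>%
  mutate_if(is.numeric, round, digits = 2)
```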
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment, Mexico and the U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which are built on a specific class of distributions \- normal, Poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
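A tiny sketch of the link and inverse link idea with base R functions, for a few arbitrary values on the linear predictor scale.

```
# links and inverse links by hand
lp = c(-2, 0, 2)      # values on the linear predictor scale

exp(lp)               # inverse of the log link: Poisson means
plogis(lp)            # inverse of the logit link: probabilities
qlogis(plogis(lp))    # applying the logit link recovers the linear predictor
```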
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and that represents the difference between what we can guess with our predictor relationships to the target and what we actually observe with it.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, The target variable is first, followed by the predictor variables, separated by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions), allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters\- \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that in our third guess we reduce the loss quite a bit relative to our first guess. This makes sense because a value of 4 for the intercept and .25 for the coefficient for `x` are not as relatively far from the true values.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
One more note regarding estimation, it is good to distinguish models from estimation procedures. The following shows the more specific to the more general for both models and estimation procedures respectively.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may speed up model training, or smooth out the variability in the process.
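As a concrete illustration, we can hand the ls_loss function from before to R's general\-purpose optim function and let it search for the values that minimize the loss. This is only a sketch with arbitrary starting values, but the result should be very close to what lm produced.

```
ls_optim = optim(
  par    = c(0, 0),                          # starting guesses for the intercept and slope
  fn     = function(beta) ls_loss(X, y, beta), # the function to minimize
  method = 'BFGS'                            # a common gradient-based algorithm
)

ls_optim$par  # compare to coef(model)
```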
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country\-level data based on surveys taken in various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target as `happiness_score`, with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
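To connect this back to the matrix multiplication noted above, multiplying the model matrix by the estimated coefficients reproduces the fitted values, and the column of ones is what carries the intercept into every row. A quick sketch of that check follows; the two columns should match up to rounding.

```
fits_by_hand = X %*% coef(happy_model_matrix)  # X times beta, i.e. the linear predictor

head(cbind(by_hand = fits_by_hand[, 1], from_lm = fitted(happy_model_base)))
```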
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also tend to skip out on standard methods like predict, coef, etc., making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in an increase of roughly 0\.2 units of happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
To understand our results, we also need a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., would all produce a slightly different result. The confidence interval provides a range of what we could expect given that uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than the typical reporting of statistical test results, though you can do both.
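For example, a compact way to gather the estimates and their intervals into one object for reporting might look like the following sketch, which you could then round or rename as needed.

```
cbind(estimate = coef(happy_model_base), confint(happy_model_base))
```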
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
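As a small illustration of the centering point, compare the intercepts from the following two models, which differ only in whether (logged) GDP per capita is centered. This is just a sketch using scale() to center without standardizing.

```
# uncentered: the intercept is the expected happiness when log GDP per capita is 0
coef(lm(happiness_score ~ log_gdp_per_capita, data = happy))

# centered: the intercept is the expected happiness at the average log GDP per capita
coef(lm(happiness_score ~ scale(log_gdp_per_capita, scale = FALSE), data = happy))
```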
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
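A min\-max scaler is also easy to write yourself if needed; the following is a minimal sketch (packages such as recipes provide a step for this as well).

```
min_max = function(x) {
  # rescale to the 0-1 range, ignoring missing values
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

summary(min_max(happy$happiness_score))  # now ranges from 0 to 1
```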
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this explicitly is rarely required for most modeling situations, but even when it isn't required, it can sometimes be useful. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same packages that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is \-0\.34 lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
Weights sum to zero, but are otherwise arbitrary.
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S.\-Mexico mean difference, as expected.
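The same arithmetic can be done directly from the group means; the following is a quick sketch using the summarised values above.

```
group_means = nafta %>%
  group_by(country) %>%
  summarise(happy = mean(happiness_score, na.rm = TRUE)) %>%
  pull(happy)  # ordered Canada, Mexico, United States

mean(group_means[2:3]) - group_means[1]  # the canada_vs_other comparison
group_means[3] - group_means[2]          # the mexico_vs_us comparison
```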
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may lend themselves to a single variable that simply notes whether any of them was present or not.
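As a brief sketch of the principal components idea using the happiness data, the first component below is simply a weighted combination of a few related well\-being items (the choice of items here is purely illustrative).

```
wellbeing = happy %>%
  select(social_support, positive_affect, negative_affect) %>%
  drop_na()

pca = prcomp(wellbeing, scale. = TRUE)  # standardize the items, then extract components

head(pca$x[, 1])  # the first component score, a weighted sum of the items
```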
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically lead to overly optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing a nice, expressive continuous metric butchered into two categories (e.g. taking a numeric age and collapsing it to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn them into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
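As a sketch of such a cleanup, the following adds interval estimates and rounds the numeric columns; adjust the formatting to your needs.

```
tidy(happy_model_base, conf.int = TRUE) %>%   # include interval estimates
  mutate(across(where(is.numeric), function(x) round(x, 3)))
```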
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the mean: the exponentiated linear predictor
y = rpois(N, lambda = mu) # the target variable; lambda is the mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
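The same recipe extends to other members of the family. As a minimal sketch that continues the simulation style above (the shape value is arbitrary, and this is my own illustration rather than part of the original demo), a gamma model for positive, skewed outcomes just swaps the distribution while keeping the log link.
```
# continuing with the x, beta, and N defined above
mu = exp(beta[1] + beta[2]*x)            # log link, as with the poisson model
y  = rgamma(N, shape = 2, rate = 2/mu)   # gamma outcome with mean = shape/rate = mu
glm(y ~ x, family = Gamma(link = 'log'))
```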
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As a full demonstration is beyond the scope of this document, the main point here is awareness. But see these documents on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
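Just to give a flavor of the syntax rather than a demonstration, a random intercept model with lme4 might look like the following sketch, where the outcome `y`, predictor `x`, grouping variable `id`, and data frame `d` are all hypothetical.
```
# a random intercept for each level of a hypothetical grouping variable 'id' (not run)
lme4::lmer(y ~ x + (1 | id), data = d)
```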
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally, what you explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, often without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Exploration
=================
The following section shows how to get started with modeling in R generally, with a focus on concepts, tools, and syntax, rather than trying to understand the specifics of a given model. We first dive into model exploration, getting a sense of the basic mechanics behind our modeling tools, and contemplating standard results. We’ll then shift our attention to understanding the strengths and limitations of our models. We’ll then change from classical methods to explore machine learning techniques. The goal of these chapters is to provide an overview of concepts and ways to think about modeling.
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable, Endogenous, Response, Outcome, Output, Y, Regressand, Left hand side (LHS) |
| Predictor | Independent variable, Exogenous, Explanatory Variable, Covariate, Input, X, Regressor, Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and this term captures the difference between the value the predictors would lead us to guess for the target and what we actually observe.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, followed by the predictor variables, separated by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
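For instance, a logistic growth curve could be fit with nls and a self-starting function. This is only a sketch, with a hypothetical data frame `growth_data` containing `pop` and `time` columns.
```
# logistic growth via nonlinear least squares (hypothetical data, not run)
nls(pop ~ SSlogis(time, Asym, xmid, scal), data = growth_data)
```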
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters, \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that our third guess reduces the loss quite a bit relative to our first guess. This makes sense, because a value of 4 for the intercept and .25 for the coefficient of `x` are relatively close to the true values.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to the true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
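The following is a small sketch of that generative view, using fresh object names so as not to overwrite the earlier ones; the true values are again 5 and .5.
```
set.seed(1234)
x_sim  = rnorm(100)
mu_sim = 5 + .5 * x_sim                     # the mean as a weighted sum of the covariate
y_sim  = rnorm(100, mean = mu_sim, sd = 1)  # generate y from a normal distribution
coef(lm(y_sim ~ x_sim))                     # estimates will be close to 5 and .5
```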
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following table moves from the more specific to the more general, for models and estimation procedures respectively.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful to deal with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may reduce the time it takes to train the model, or smooth out the variability in the process.
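To make the idea concrete, here is a minimal sketch using base R’s general-purpose optim to minimize the ls_loss function from before; the starting values are arbitrary.
```
optim_result = optim(
  par    = c(0, 0),                             # starting guesses for the two coefficients
  fn     = function(b) ls_loss(X, y, beta = b), # the loss function defined earlier
  method = 'BFGS'
)
optim_result$par                                # very close to coef(model) from before
```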
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country level data based on surveys taken at various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target at `happiness_score` with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
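As a quick sanity check on what the model matrix is doing, multiplying it by the estimated coefficients reproduces the fitted values.
```
head(X %*% happy_model_matrix$coefficients)  # model matrix times coefficients
head(happy_model_matrix$fitted.values)       # the same values as returned by lm.fit
```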
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also tend to skip out on other standard functionality, such as predict, coef, and similar methods, making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly a 0\.2 unit increase in happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
One thing we must also have in order to understand our results is to get a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., all would result in a slightly different result. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
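A quick sketch with the happiness data shows this: centering the predictor changes the intercept to something interpretable, while the slope stays the same.
```
coef(lm(happiness_score ~ log_gdp_per_capita, data = happy))
coef(lm(happiness_score ~ scale(log_gdp_per_capita, scale = FALSE), data = happy))  # centered
```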
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
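A minimal sketch of such a rescaling:
```
min_max = function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}
range(min_max(happy$happiness_score), na.rm = TRUE)  # now 0 to 1
```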
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is about 0\.34 points lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
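As a quick check, adding a group’s coefficient to the intercept recovers that group’s mean.
```
# reference group mean plus the group coefficient gives the group mean
coef(model_dummy)['(Intercept)'] + coef(model_dummy)['countryUnited States']  # the U.S. mean
```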
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some collections of (especially binary) items may lend themselves to a single variable that simply notes whether any of the items was present or not.
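As a bare-bones sketch, where `items` is a hypothetical data frame of numeric survey items:
```
pc    = prcomp(items, scale. = TRUE)  # principal components of the items (hypothetical data)
index = pc$x[, 1]                     # first component scores serve as a weighted sum score
```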
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Little pains advanced modelers more than seeing results where a nice expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
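If you do need to lump rare categories, the forcats package (part of the tidyverse) makes it painless; a sketch, where `race` is a hypothetical factor:
```
# keep the most common levels and lump the rest into 'Other' (hypothetical factor, not run)
forcats::fct_lump(race, n = 3)
```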
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
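For instance, a sketch of the \\(R^2\\) decomposition just mentioned, assuming the relaimpo package is installed and using its calc.relimp function with the ‘lmg’ decomposition:
```
library(relaimpo)                             # assumed installed
calc.relimp(happy_model_base, type = 'lmg')   # decompose R^2 across the three predictors
# boot.relimp(happy_model_base, b = 1000)     # bootstrapping would give interval estimates
```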
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn that into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
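Note that sigma is computed with the residual degrees of freedom; if you want the plain RMSE, it’s easy enough to get from the residuals directly.
```
sqrt(mean(residuals(happy_model_base)^2))  # RMSE using n rather than the residual df
```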
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
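For example, a minimal cleanup might just round the numeric columns before sending the result to your table-making function of choice.
```
tidy(happy_model_base) %>%
  mutate(across(where(is.numeric), ~ round(.x, 3)))  # round all numeric columns
```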
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, often without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
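For example, any of the following transformations would be reasonable (a sketch only; the new column names are arbitrary).
```
load('data/google_apps.RData')

google_apps = google_apps %>%
  mutate(
    reviews_100k = reviews / 1e5,        # simple rescaling
    reviews_sc   = scale(reviews)[, 1],  # standardized
    reviews_log  = log(reviews + 1)      # logged, adding 1 to handle zeros
  )
```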
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable |
|  | Endogenous |
|  | Response |
|  | Outcome |
|  | Output |
|  | Y |
|  | Regressand |
|  | Left hand side (LHS) |
| Predictor | Independent variable |
|  | Exogenous |
|  | Explanatory Variable |
|  | Covariate |
|  | Input |
|  | X |
|  | Regressor |
|  | Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and \\(\\epsilon\\) is the difference between what the predictor relationships lead us to guess for the target and what we actually observe.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, followed by a tilde (`~`) and then the predictor variables. Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
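As a sketch of what that would look like, a logistic growth curve could be fit with nls and one of R's self-starting models. The data and variable names here (`growth_data`, `pop`, `time`) are hypothetical.
```
# fit a logistic growth curve to a hypothetical population measured over time
fit_growth = nls(pop ~ SSlogis(time, Asym, xmid, scal), data = growth_data)
summary(fit_growth)
```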
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters\- \\(\\hat{y}\\)\- and the squared difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that in our third guess we reduce the loss quite a bit relative to our first guess. This makes sense, because a value of 4 for the intercept and .25 for the coefficient for `x` are relatively close to the true values of 5 and .5.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to the true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
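As a quick sketch, we can simulate a target from this data generating perspective, reusing the `X` and `N` objects created above and the true coefficients of 5 and .5.
```
mu    = X %*% c(5, .5)               # the mean as a weighted sum of the covariates
y_sim = rnorm(N, mean = mu, sd = 1)  # draw each observation from a normal around its mean
```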
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following table lists common models and estimation procedures, each ordered from the more specific to the more general.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. For the more predictive models previously discussed, knowing more about the optimization algorithm may speed up the time it takes to train the model, or smooth out the variability in the process.
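As a small demonstration, we could hand the ls\_loss function defined above to a general-purpose optimizer like base R's optim, which should land on estimates very close to those from lm (a sketch; the starting values are arbitrary).
```
optim(
  par = c(0, 0),                            # arbitrary starting values for the two coefficients
  fn  = function(beta) ls_loss(X, y, beta)  # the loss function defined previously
)$par
```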
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general, the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country\-level data based on surveys taken in various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target as `happiness_score` with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also skip out on other conveniences, like standard methods such as predict, coef, etc., making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly 0\.2 units of happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
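A quick way to see this is to divide the coefficient by the standard deviation of the target (a rough check; it ignores that the model uses only the complete cases).
```
coef(happy_model_base)['democratic_quality'] / sd(happy$happiness_score, na.rm = TRUE)
```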
To understand our results, we must also get a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., would all result in a somewhat different estimate. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
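For example, we could refit the earlier model with centered predictors; the slopes are unchanged, but the intercept now reflects the expected happiness score when every predictor is at its mean (a sketch using scale with `scale = FALSE` to center only).
```
happy_model_centered = lm(
  happiness_score ~
    scale(democratic_quality, scale = FALSE) +
    scale(generosity,         scale = FALSE) +
    scale(log_gdp_per_capita, scale = FALSE),
  data = happy
)

coef(happy_model_centered)['(Intercept)']  # expected happiness at the predictor means
```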
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
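A simple version is easy to write yourself, though recipes also provides a corresponding step (step\_range). A minimal sketch:
```
# rescale a numeric vector to the 0-1 range
min_max = function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

min_max(c(10, 15, 20, 25, 30))  # 0 .25 .5 .75 1
```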
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this setting, a coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is 0\.34 lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
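We can verify this by adding a coefficient to the intercept, which reproduces the group means we compute a bit later.
```
coef(model_dummy)['(Intercept)']                                       # Canada mean
coef(model_dummy)['(Intercept)'] + coef(model_dummy)['countryMexico']  # Mexico mean
```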
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may lend themselves to creating a single variable that simply notes whether any of that collection of variables was present or not.
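As a sketch, here is one way to create a simple principal-components-based index from a few of the happiness data items (the item choice is purely illustrative).
```
items = happy %>%
  select(social_support, freedom_to_make_life_choices, positive_affect) %>%
  drop_na()

pc_index = prcomp(items, scale. = TRUE)$x[, 1]  # first component scores as an index
```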
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Little pains advanced modelers more than seeing results where a nice expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
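If you do need to collapse rare categories, the forcats package makes it straightforward (a toy sketch; the factor here is made up).
```
library(forcats)

# a made-up factor with two rare levels
group = factor(c(rep('A', 50), rep('B', 40), rep('C', 3), rep('D', 2)))

# lump levels with fewer than 10 observations into 'Other'
table(fct_lump_min(group, min = 10, other_level = 'Other'))
```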
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also to show how to standardize a numeric variable as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is 0\.34 points lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
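A quick check by hand, using the (rounded) group means from the summary above, so the results are approximate:
```
group_means = c(canada = 7.37, mexico = 6.76, us = 7.03)

# _fac1: the average of Mexico and the U.S. vs. Canada
mean(group_means[c('mexico', 'us')]) - group_means['canada']  # ~ -0.48

# _fac2: the U.S. vs. Mexico difference
group_means['us'] - group_means['mexico']                     # ~ 0.27
```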
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may lend themselves to the creation of a single variable that simply notes whether any of those variables was present or not.
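As a rough sketch of the principal components idea, with simulated items standing in for a real scale (the item names and data here are made up purely for illustration):
```
# simulated correlated items; in practice these would be your scale items
set.seed(123)
common = rnorm(100)                        # a shared underlying trait
items = data.frame(
  item1 = common + rnorm(100, sd = .5),
  item2 = common + rnorm(100, sd = .5),
  item3 = common + rnorm(100, sd = .5)
)

pca    = prcomp(items, scale. = TRUE)      # PCA on standardized items
scores = pca$x[, 1]                        # first component: a weighted sum score

cor(scores, rowSums(items))                # typically near 1 in absolute value
```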
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing results where a nice, expressive continuous metric has been butchered into two categories (e.g. taking a numeric age and collapsing to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group because each minority category has too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
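If you do need to collapse rare categories, a minimal sketch with forcats (part of the tidyverse), using a made\-up factor, might look like the following.
```
library(forcats)

# a hypothetical factor with a couple of rare levels
group = factor(c(rep('a', 50), rep('b', 45), rep('c', 3), rep('d', 2)))

# keep the 2 most common levels and lump the rest into 'Other'
table(fct_lump(group, n = 2))
```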
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
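For a rough sketch of what such an \(R^2\) decomposition looks like in practice, assuming the relaimpo interface and the happy_model_base model from before (treat this as illustrative rather than definitive):
```
library(relaimpo)

# decompose R^2 into per-predictor contributions (lmg method);
# interval estimates would require bootstrapping on top of this
calc.relimp(happy_model_base, type = 'lmg')
```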
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then create a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
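A rough sketch of such cleanup might look like the following; the rounding and column names are just one choice among many, and assume the tidyverse is loaded as before.
```
tidy(happy_model_base) %>%
  mutate(across(where(is.numeric), ~ round(.x, 3))) %>%  # round for display
  rename(
    Coefficient  = term,
    Estimate     = estimate,
    `Std. Error` = std.error,
    t            = statistic,
    p            = p.value
  )
```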
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
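While a full demonstration is beyond our scope, the syntax typically changes very little; a minimal mixed model sketch with lme4 and made\-up clustered data might look like the following.
```
library(lme4)

# hypothetical clustered data: 10 groups, 20 observations each
set.seed(123)
d = data.frame(group = rep(1:10, each = 20), x = rnorm(200))
d$y = 2 + .5 * d$x + rnorm(10)[d$group] + rnorm(200, sd = .5)

# random intercept per group; only the (1 | group) term is new relative to lm
lmer(y ~ x + (1 | group), data = d)
```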
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/models.html |
Model Exploration
=================
The following section shows how to get started with modeling in R generally, with a focus on concepts, tools, and syntax, rather than trying to understand the specifics of a given model. We first dive into model exploration, getting a sense of the basic mechanics behind our modeling tools, and contemplating standard results. We then shift our attention to understanding the strengths and limitations of our models, before moving from classical methods to explore machine learning techniques. The goal of these chapters is to provide an overview of concepts and ways to think about modeling.
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable, Endogenous, Response, Outcome, Output, Y, Regressand, Left hand side (LHS) |
| Predictor | Independent variable, Exogenous, Explanatory Variable, Covariate, Input, X, Regressor, Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and that represents the difference between what we can guess with our predictor relationships to the target and what we actually observe with it.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, separated from the predictor variables by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters\- \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that in our third guess we reduce the loss quite a bit relative to our first guess. This makes sense because a value of 4 for the intercept and .25 for the coefficient for `x` are not as relatively far from the true values.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
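A quick sketch of that generative view, reusing the X and true coefficient values (5 and .5) from the earlier least squares example, with sigma set to 1 purely for illustration:
```
sigma = 1
mu    = X %*% c(5, .5)                          # the linear predictor
y_sim = rnorm(nrow(X), mean = mu, sd = sigma)   # draw the target from a normal

coef(lm(y_sim ~ X[, 2]))                        # estimates should land near 5 and .5
```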
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following shows the more specific to the more general for both models and estimation procedures respectively.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful to deal with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may reduce the time it takes to train the model, or smooth out the variability in the process.
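To make this concrete, here is a minimal sketch using R’s general\-purpose optim function with the ls_loss function and data from before; starting from a poor guess, the search should land near the OLS estimates.
```
fit_optim = optim(
  par = c(0, 0),                            # starting guesses for the coefficients
  fn  = function(b) ls_loss(X, y, beta = b) # the loss to minimize
)

fit_optim$par          # should be close to coef(model) from before
fit_optim$convergence  # 0 indicates convergence
```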
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country level data based on surveys taken at various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target as `happiness_score`, with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages tend to skip out on other conveniences as well, such as standard methods like predict and coef, making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly a 0\.2 unit increase in happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1, so in standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
One thing we must also have in order to understand our results is to get a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., would all lead to a slightly different estimate. The confidence interval provides a range of what we could expect given that uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than the typical reporting of statistical test results, though you can do both.
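If you are curious where these intervals come from, a small sketch can roughly reproduce them from the estimates and standard errors stored in the summary object, using the appropriate t quantile for the residual degrees of freedom.
```
est  = coef(happy_model_base_sum)                   # matrix of estimates, SEs, t and p values
crit = qt(.975, df = happy_model_base$df.residual)  # critical value for a 95% interval

cbind(
  lower = est[, 'Estimate'] - crit * est[, 'Std. Error'],
  upper = est[, 'Estimate'] + crit * est[, 'Std. Error']
)
```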
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
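As a minimal sketch, and assuming the tidyverse is loaded, centering the predictors of the earlier happiness model might look like the following. The slopes are unchanged, but the intercept now reflects expected happiness when the covariates are at their means.
```
happy_centered = happy %>%
  mutate(across(
    c(democratic_quality, generosity, log_gdp_per_capita),
    function(x) x - mean(x, na.rm = TRUE)   # subtract the mean from each predictor
  ))

lm(
  happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
  data = happy_centered
)
```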
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\cdot\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
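Base R doesn’t have a dedicated function for this, but it is easy to write one. The following min_max helper is just a hypothetical sketch, applied here to generosity.
```
min_max <- function(x, na.rm = TRUE) {
  # rescale x to the 0-1 range
  (x - min(x, na.rm = na.rm)) / (max(x, na.rm = na.rm) - min(x, na.rm = na.rm))
}

summary(min_max(happy$generosity))
```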
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
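If you ever do need the explicit coding without recipes, base R’s model.matrix will create it from a formula. A quick sketch with the nafta data follows; dropping the intercept gives one indicator column per country.
```
head(model.matrix(~ country, data = nafta))      # dummy coding with a reference group
head(model.matrix(~ country - 1, data = nafta))  # one indicator column per country
```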
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is 0\.34 lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may lend themselves to the creation of a single variable that simply notes whether any of that collection of variables was present or not.
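As a rough sketch of the principal components idea, and assuming the tidyverse is loaded, the following reduces a few of the happiness covariates to component scores. The particular variables are chosen only for illustration.
```
happy_sub = happy %>%
  select(log_gdp_per_capita, healthy_life_expectancy_at_birth, social_support) %>%
  drop_na()

pc = prcomp(happy_sub, scale. = TRUE)  # PCA on standardized variables
summary(pc)                            # proportion of variance for each component
head(pc$x[, 1])                        # first component scores, usable as an index
```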
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing results where a nice expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing it to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
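To give a flavor of the kind of computation involved, the following bootstraps standardized coefficients from the base happiness model (assuming the tidyverse is loaded). It’s only a sketch to convey the idea of getting interval estimates, not a definitive importance measure.
```
set.seed(1234)

boot_coefs = map_dfr(1:500, function(i) {
  # resample rows with replacement, then refit with standardized variables
  happy_boot = happy %>%
    select(happiness_score, democratic_quality, generosity, log_gdp_per_capita) %>%
    drop_na() %>%
    slice_sample(prop = 1, replace = TRUE)

  fit = lm(
    scale(happiness_score) ~ scale(democratic_quality) + scale(generosity) + scale(log_gdp_per_capita),
    data = happy_boot
  )

  as_tibble(as.list(coef(fit)))
})

# percentile intervals for each standardized coefficient
boot_coefs %>%
  pivot_longer(everything(), names_to = 'term', values_to = 'estimate') %>%
  group_by(term) %>%
  summarise(lower = quantile(estimate, .025), upper = quantile(estimate, .975))
```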
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn them into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
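As a sanity check, you can also compute the RMSE directly from the residuals. It will differ slightly from `sigma`, which uses the residual degrees of freedom in its denominator.
```
sqrt(mean(residuals(happy_model_base)^2))  # divides by N rather than N minus number of parameters
```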
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
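For instance, a light cleanup of the tidy output might look something like the following; the rounding and column choices are just one possibility.
```
tidy(happy_model_base) %>%
  mutate(across(where(is.numeric), function(x) round(x, 3))) %>%  # round for display
  select(term, estimate, std.error, p.value)
```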
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable, Endogenous, Response, Outcome, Output, Y, Regressand, Left hand side (LHS) |
| Predictor | Independent variable, Exogenous, Explanatory Variable, Covariate, Input, X, Regressor, Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and that represents the difference between what we can guess with our predictor relationships to the target and what we actually observe with it.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, followed by the predictor variables, separated by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters, \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that in our third guess we reduce the loss quite a bit relative to our first guess. This makes sense because a value of 4 for the intercept and .25 for the coefficient for `x` are not as far from the true values.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to the true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
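In code, the data generating view of the simulated example from before looks like this (a sketch reusing X and N, with true coefficients of 5 and .5 and a standard deviation of 1; a new name is used so the original y is not overwritten).
```
mu    = X %*% c(5, .5)                # the mean is a weighted sum of the covariates
y_gen = rnorm(N, mean = mu, sd = 1)   # draw the target from a normal distribution
```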
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following shows the more specific to the more general for both models and estimation procedures respectively.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may speed up the time it takes to train the model, or smooth out the variability in the process.
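Though you’d rarely do this for a standard linear model, a minimal sketch with base R’s general purpose optim function shows the idea, reusing the ls_loss function and simulated data from before; the estimates should land very close to the lm results.
```
result = optim(
  par = c(0, 0),                            # starting guesses for the intercept and slope
  fn  = function(beta) ls_loss(X, y, beta)  # the objective to minimize
)

result$par          # estimated coefficients
result$convergence  # 0 indicates successful convergence
```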
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country level data based on surveys taken at various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target at `happiness_score` with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
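To tie this back to the matrix multiplication note above, the fitted values are just the model matrix multiplied by the coefficients. A quick check against what lm.fit itself stores:

```
head(X %*% coef(happy_model_matrix))    # model matrix times coefficients
head(happy_model_matrix$fitted.values)  # fitted values stored by lm.fit; same result
```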
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also skip out on other conveniences, such as standard methods like predict and coef, making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
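To get a rough sense of the speed difference alluded to earlier, one could time repeated fits of each approach. This is only a sketch using base R's system.time; the absolute numbers will vary by machine and data size, and only the relative difference is of interest.

```
# compare repeated fits of the formula interface vs. the bare-bones matrix interface
system.time(
  for (i in 1:100) lm(happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
)

system.time(
  for (i in 1:100) lm.fit(X, y)
)
```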
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly 0\.2 units of happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
Another thing we need in order to understand our results is a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
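To make that calculation explicit, divide the raw coefficient by the standard deviation of the outcome. A quick sketch follows; with the unrounded coefficient the value comes out closer to 0.15.

```
# effect of democratic quality expressed in standard deviation units of the outcome
coef(happy_model_base)['democratic_quality'] / sd(happy$happiness_score, na.rm = TRUE)
```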
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., all would result in a slightly different result. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
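If you want to see where those intervals come from, each is just the estimate plus or minus a multiple of its standard error. The following sketch reconstructs the interval for democratic quality from the stored summary object, using the t distribution with the model's residual degrees of freedom.

```
est = happy_model_base_sum$coefficients['democratic_quality', 'Estimate']
se  = happy_model_base_sum$coefficients['democratic_quality', 'Std. Error']

# estimate +/- t multiplier * standard error
est + c(-1, 1) * qt(.975, df = happy_model_base$df.residual) * se
```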
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
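As a quick sketch of that point, we can center the predictors from the earlier model and refit. The slopes are unchanged; only the intercept moves. The `_c` names and the `happy_trans` object are just for illustration, and the means here use all non-missing values rather than only the complete cases the model ends up using.

```
happy_trans = happy %>%
  mutate(
    democratic_quality_c = democratic_quality - mean(democratic_quality, na.rm = TRUE),
    generosity_c         = generosity - mean(generosity, na.rm = TRUE),
    log_gdp_per_capita_c = log_gdp_per_capita - mean(log_gdp_per_capita, na.rm = TRUE)
  )

happy_model_centered = lm(
  happiness_score ~ democratic_quality_c + generosity_c + log_gdp_per_capita_c,
  data = happy_trans
)

coef(happy_model_centered)  # slopes unchanged; intercept is expected happiness at roughly average covariate values
```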
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
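There is no dedicated base R function for this, but it is a one-liner. A small sketch follows; the helper name is my own.

```
rescale_01 = function(x) {
  # map x to the [0, 1] range; missing values are ignored when finding the min and max
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

summary(rescale_01(happy$life_ladder))
```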
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. In this case, the U.S. is \-0\.34 less on the happy score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
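A quick sketch verifying that arithmetic from the dummy-coded model fit earlier:

```
dummy_coefs = coef(model_dummy)

(dummy_coefs['countryMexico'] + dummy_coefs['countryUnited States']) / 2  # matches country_fac1
dummy_coefs['countryUnited States'] - dummy_coefs['countryMexico']        # matches country_fac2
```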
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may tend toward the creation of a single variable that simply notes whether any of that collection of variables was present or not.
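As a small sketch of the principal components idea, we could reduce a few of the well-being related items in the happiness data to a single score. Complete cases only are used here for simplicity.

```
wellbeing = happy %>%
  select(positive_affect, negative_affect, social_support) %>%
  na.omit()

pca_wellbeing = prcomp(wellbeing, scale. = TRUE)

summary(pca_wellbeing)      # proportion of variance captured by each component
head(pca_wellbeing$x[, 1])  # first component scores could serve as a single index
```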
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Little pains advanced modelers more than seeing results where a nice expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
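If you do need to collapse rare categories of a factor, it is better to do so explicitly than by arbitrary binning. Here is a small sketch using forcats; the function is real, but the factor is made up for illustration.

```
library(forcats)

x = factor(c(rep('a', 50), rep('b', 40), rep('c', 5), rep('d', 2)))

table(fct_lump_min(x, min = 10))  # levels with fewer than 10 observations are lumped into 'Other'
```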
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
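As a very rough sketch of the effort involved (and not an endorsement of this particular metric), the following bootstraps the drop in \\(R^2\\) when democratic quality is removed from our earlier model. A serious attempt would use more replicates and a more principled decomposition, such as what relaimpo provides.

```
set.seed(1234)

r2_drop = replicate(500, {
  idx  = sample(nrow(happy), replace = TRUE)
  boot = happy[idx, ]
  full = lm(happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = boot)
  red  = lm(happiness_score ~ generosity + log_gdp_per_capita, data = boot)
  summary(full)$r.squared - summary(red)$r.squared
})

# interval estimate for the R-squared contribution of democratic_quality
quantile(r2_drop, c(.025, .5, .975))
```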
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn them into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
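Note that `sigma` is the residual standard error, which differs from the raw RMSE only in its divisor (residual degrees of freedom rather than the number of observations). If you prefer the plain RMSE, it can be computed directly from the residuals.

```
# RMSE computed directly from the residuals
sqrt(mean(residuals(happy_model_base)^2))
```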
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
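For instance, here is a light cleanup sketch along those lines; the rounding and renaming choices are just one possibility.

```
tidy(happy_model_base) %>%
  mutate_if(is.numeric, round, digits = 3) %>%   # round all numeric columns
  rename(coefficient = estimate, SE = std.error) # friendlier column names for a table
```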
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
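As one more small sketch, a skewed, strictly positive outcome might use the gamma distribution with a log link. This reuses the simulated `x`, `beta`, and `N` from above, and only the family and link change.

```
mu = exp(beta[1] + beta[2]*x)           # same log link / linear predictor as the poisson example
y  = rgamma(N, shape = 5, rate = 5/mu)  # gamma-distributed values with mean mu

glm(y ~ x, family = Gamma(link = 'log'))
```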
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
A full demonstration is beyond the scope of this document, so the main point here is awareness. But see these documents on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
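That said, a small taste of the syntax may help. The following sketch assumes the lme4 package is installed and treats country as the clustering unit for a random intercept model.

```
library(lme4)

# random intercept for each country; fixed effects as in the earlier model
happy_mixed = lmer(
  happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + (1 | country),
  data = happy
)

summary(happy_mixed)
```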
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages make certain types of models easy to pull off, often without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Exploration
=================
The following section shows how to get started with modeling in R generally, with a focus on concepts, tools, and syntax, rather than trying to understand the specifics of a given model. We first dive into model exploration, getting a sense of the basic mechanics behind our modeling tools, and contemplating standard results. We’ll then shift our attention to understanding the strengths and limitations of our models. We’ll then change from classical methods to explore machine learning techniques. The goal of these chapters is to provide an overview of concepts and ways to think about modeling.
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable |
|  | Endogenous |
|  | Response |
|  | Outcome |
|  | Output |
|  | Y |
|  | Regressand |
|  | Left hand side (LHS) |
| Predictor | Independent variable |
|  | Exogenous |
|  | Explanatory Variable |
|  | Covariate |
|  | Input |
|  | X |
|  | Regressor |
|  | Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and that represents the difference between what we can guess with our predictor relationships to the target and what we actually observe with it.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, the target variable comes first, followed by the predictor variables, separated by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
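To make the distinction concrete, the following is a minimal sketch of fitting a logistic growth curve with nls, using simulated data; the data and variable names are made up purely for illustration, and SSlogis is a self\-starting logistic model available in base R.
```
# simulate data from a logistic growth curve, then recover the parameters with nls
set.seed(123)

growth = data.frame(time = 1:50)
growth$size = SSlogis(growth$time, Asym = 10, xmid = 25, scal = 5) + rnorm(50, sd = .25)

nls(size ~ SSlogis(time, Asym, xmid, scal), data = growth)  # self-starting, no start values needed
```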
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions) that allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters, \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that our third guess reduces the loss quite a bit relative to our first guess. This makes sense, because a value of 4 for the intercept and .25 for the coefficient for `x` are much closer to the true values.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to the true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
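A small sketch can tie this generative view back to the earlier least squares example; it assumes the model matrix `X` and the true coefficient values (5 and .5) from that example are still in your workspace.
```
beta  = c(5, .5)                           # the true coefficients from before
mu    = X %*% beta                         # the mean is a weighted sum of the covariates
y_gen = rnorm(nrow(X), mean = mu, sd = 1)  # y is drawn from a normal with that mean

coef(lm(y_gen ~ X[, 2]))                   # estimates should be near 5 and .5
```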
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following table lists common models and estimation procedures, each ordered roughly from more specific to more general.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may speed up model training, or smooth out the variability in the process.
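As a concrete sketch, we can hand the ls_loss function from before to base R’s optim, which iteratively searches for the values that minimize it; this assumes `X`, `y`, and `ls_loss` from the earlier example are still available.
```
# a general-purpose optimizer searching for the coefficients that minimize the loss
ls_optim = optim(
  par = c(0, 0),                       # starting guesses for intercept and slope
  fn  = function(b) ls_loss(X, y, b)   # the objective to minimize
)

ls_optim$par          # should be close to coef(model)
ls_optim$convergence  # 0 indicates successful convergence
```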
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula, and a data frame that contains the variables specified in that formula. Consider the following models. In general the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country level data based on surveys taken at various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target as `happiness_score`, with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also tend to skip standard methods like predict, coef, etc., making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to democratic quality, moving one point on that scale results in an increase of roughly 0\.2 units of happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1\. So, in terms of standard deviation units, moving 1 point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
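One way to convince yourself of this interpretation is to compare predictions that differ only on democratic quality, holding the other predictors constant. The following is a minimal sketch using the model and data from above; the values chosen for the other predictors (their means) are arbitrary.
```
# predicted happiness at two values of democratic quality, other predictors at their means
nd = data.frame(
  democratic_quality = c(0, 1),
  generosity         = mean(happy$generosity, na.rm = TRUE),
  log_gdp_per_capita = mean(happy$log_gdp_per_capita, na.rm = TRUE)
)

diff(predict(happy_model_base, newdata = nd))  # equals the democratic_quality coefficient (~0.17)
```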
One thing we must also have in order to understand our results is to get a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., all would result in a slightly different result. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
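For instance, with the happiness model from before, we could center the predictors on the fly; scale with `scale = FALSE` just subtracts the mean. A sketch, shown only to illustrate the intercept change:
```
# slopes are unchanged; the intercept is now the expected happiness
# when the predictors are at their (sample) means
lm(
  happiness_score ~ scale(democratic_quality, scale = FALSE) +
    scale(generosity, scale = FALSE) +
    scale(log_gdp_per_capita, scale = FALSE),
  data = happy
)
```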
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot\\beta\\cdot\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to a \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to a percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in \\(y\\)[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
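A quick sketch with the happy data shows the raw and standardized versions side by side.
```
# coefficient per unit of logged GDP vs. per standard deviation, in standard deviation units
lm(happiness_score ~ log_gdp_per_capita, data = happy)
lm(scale(happiness_score) ~ scale(log_gdp_per_capita), data = happy)
```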
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
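The computation itself is just a rescaling by the observed range, as in the following sketch (packages like recipes also provide a step for this).
```
# a minimal min-max normalization
minmax = function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

range(minmax(happy$log_gdp_per_capita), na.rm = TRUE)  # now 0 to 1
```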
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
In this case, the coefficient represents the difference in means on the target variable between the reference group and the group in question. Here, the U.S. is roughly 0\.34 lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some (especially binary) items may tend toward the creation of a single variable that simply notes whether any of those collection of variables was present or not.
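As a sketch of the idea, the following creates a single component score from a few related items in the happy data; the choice of items here is purely for illustration.
```
# first principal component as a weighted composite of a few related items (complete cases only)
pc_dat = na.omit(happy[, c('positive_affect', 'negative_affect', 'social_support')])
pc     = prcomp(pc_dat, scale. = TRUE)

head(pc$x[, 1])  # component scores that could serve as a single index
```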
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding results are measured with error, so treating the categories or factor scores as you would observed variables will typically result in optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing results where a nice, expressive continuous metric has been butchered into two categories (e.g. taking a numeric age and collapsing it to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effects might be worthy of attention, however small, if they could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
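For reference only, a decomposition along those lines might look something like the following; this is a sketch assuming the relaimpo interface, so check its documentation before relying on it.
```
# not run: decompose R^2 into per-predictor contributions for the earlier model
# library(relaimpo)
# calc.relimp(happy_model_base, type = 'lmg')
# bootstrapping would then be needed for interval estimates on these shares
```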
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn them into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
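For instance, a little post\-processing of the tidy output is often all that’s needed before handing it off to a table function (a sketch).
```
# round the numeric columns before passing the result to a table function
tidy(happy_model_base) %>%
  mutate_if(is.numeric, round, digits = 3)
```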
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). The Poisson naturally works with the log of the mean, but rather than apply that transformation explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which include a specific class of distributions \- normal, Poisson, binomial, gamma, beta, and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
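For example, the same pattern extends to a skewed, strictly positive outcome via a gamma distribution with a log link (a sketch with simulated data).
```
set.seed(123)
x  = rnorm(1000)
mu = exp(1 + .5 * x)                          # inverse link: exponentiate the linear predictor
y  = rgamma(1000, shape = 2, scale = mu / 2)  # mean = shape * scale = mu

glm(y ~ x, family = Gamma(link = 'log'))
```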
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
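For a flavor of the syntax only, a random intercept model for the happy data might look like the following (a sketch, not run here).
```
# not run: a random intercept per country acknowledges that yearly
# observations are clustered within countries
# lme4::lmer(happiness_score ~ year + log_gdp_per_capita + (1 | country), data = happy)
```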
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Taxonomy
--------------
We can begin with a taxonomy that broadly describes two classes of models:
* *Supervised*
* *Unsupervised*
* Some combination
For supervised settings, there is a target or set of target variables which we aim to predict with a set of predictor variables or covariates. This is far and away the most common case, and the one we will focus on here. It is very common in machine learning parlance to further distinguish *regression* and *classification* among supervised models, but what they actually mean to distinguish is numeric target variables from categorical ones (it’s all regression).
In the case of unsupervised models, the data itself is the target, and this setting includes techniques such as principal components analysis, factor analysis, cluster analytic approaches, topic modeling, and many others. A key goal for many such methods is *dimension reduction*, either of the columns or rows. For example, we may have many items of a survey we wish to group together into a few concepts, or cluster thousands of observations into a few simple categories.
We can also broadly describe two primary goals of modeling:
* *Prediction*
* *Explanation*
Different models will provide varying amounts of predictive and explanatory (or inferential) power. In some settings, prediction is almost entirely the goal, with little need to understand the underlying details of the relation of inputs to outputs. For example, in a model that predicts words to suggest when typing, we don’t really need to know nor much care about the details except to be able to improve those suggestions. In scientific studies however, we may be much more interested in the (potentially causal) relations among the variables under study.
While these are sometimes competing goals, it is definitely not the case that they are mutually exclusive. For example, a fully interpretable model, statistically speaking, may have no predictive capability, and so is fairly useless in practical terms. Often, very predictive models offer little understanding. But sometimes we can luck out and have both a highly predictive model as well as one that is highly interpretable.
Linear models
-------------
Most models you see in published reports are *linear models* of varying kinds, and form the basis on which to build more complex forms. In such models we distinguish a *target variable* we want to understand from the variables which we will use to understand it. Note that these come with different names depending on the goal of the study, discipline, and other factors[19](#fn19). The following table denotes common nomenclature across many disciplines.
| Type | Names |
| --- | --- |
| Target | Dependent variable |
| Endogenous |
| Response |
| Outcome |
| Output |
| Y |
| Regressand |
| Left hand side (LHS) |
| Predictor | Independent variable |
| Exogenous |
| Explanatory Variable |
| Covariate |
| Input |
| X |
| Regressor |
| Right hand side (RHS) |
A typical way to depict a linear regression model is as follows:
\\\[y \= b\_0 \+ b\_1\\cdot x\_1 \+ b\_2\\cdot x\_2 \+ ... \+ \+ b\_p\\cdot x\_p \+ \\epsilon\\]
In the above, \\(b\_0\\) is the intercept, and the other \\(b\_\*\\) are the regression coefficients that represent the relationship of the predictors \\(x\\) to the target variable \\(y\\). The \\(\\epsilon\\) represents the *error* or *residual*. We don’t have perfect prediction, and that represents the difference between what we can guess with our predictor relationships to the target and what we actually observe with it.
In R, we specify a linear model as follows. Conveniently enough, we use a function, `lm`, that stands for linear model. There are various inputs, typically starting with the formula. In the formula, The target variable is first, followed by the predictor variables, separated by a tilde (`~`). Additional predictor variables are added with a plus sign (`+`). In this example, `y` is our target, and the predictors are `x` and `z`.
```
lm(y ~ x + z)
```
We can still use linear models to investigate nonlinear relationships. For example, in the following, we can add a quadratic term or an interaction, yet the model is still linear in the parameters. All of the following are standard linear regression models.
```
lm(y ~ x + z + x:z)
lm(y ~ x + x_squared) # a better way: lm(y ~ poly(x, degree = 2))
```
In the models above, `x` has a potentially nonlinear relationship with `y`, either by varying its (linear) relationship depending on values of z (the first case) or itself (the second). In general, the manner in which nonlinear relationships may be explored in linear models is quite flexible.
An example of a *nonlinear model* would be population growth models, like exponential or logistic growth curves. You can use functions like nls or nlme for such models, but should have a specific theoretical reason to do so, and even then, flexible models such as [GAMs](https://m-clark.github.io/generalized-additive-models/) might be better than assuming a functional form.
Estimation
----------
One key thing to understand with predictive models of any kind is how we estimate the parameters of interest, e.g. coefficients/weights, variance, and more. To start with, we must have some sort of goal that choosing a particular set of values for the parameters achieves, and then find some way to reach that goal efficiently.
### Minimizing and maximizing
The goal of many estimation approaches is the reduction of *loss*, conceptually defined as the difference between the model predictions and the observed data, i.e. prediction error. In an introductory methods course, many are introduced to *ordinary least squares* as a means to estimate the coefficients for a linear regression model. In this scenario, we are seeking to come up with estimates of the coefficients that *minimize* the (squared) difference between the observed target value and the fitted value based on the parameter estimates. The loss in this case is defined as the sum of the squared errors. Formally we can state it as follows.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2\\]
We can see how this works more clearly with some simple conceptual code. In what follows, we create a [function](functions.html#writing-functions), allows us to move [row by row](iterative.html#for-loops) through the data, calculating both our prediction based on the given model parameters\- \\(\\hat{y}\\), and the difference between that and our target variable \\(y\\). We sum these squared differences to get a total. In practice such a function is called the loss function, cost function, or objective function.
```
ls_loss <- function(X, y, beta) {
# initialize the objects
loss = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
# for each row, calculate y_hat and square the difference with y
for (n in 1:nrow(X)) {
y_hat[n] = sum(X[n, ] * beta)
loss[n] = (y[n] - y_hat[n]) ^ 2
}
sum(loss)
}
```
Now we need some data. Let’s construct some data so that we know the true underlying values for the regression coefficients. Feel free to change the sample size `N` or the coefficient values.
```
set.seed(123) # for reproducibility
N = 100
X = cbind(1, rnorm(N)) # a model matrix; first column represents the intercept
y = 5 * X[, 1] + .5 * X[, 2] + rnorm(N) # a target with some noise; truth is y = 5 +.5*x
df = data.frame(y = y, x = X[, 2])
```
Now let’s make some guesses for the coefficients, and see what the corresponding sum of the squared errors, i.e. the loss, would be.
```
ls_loss(X, y, beta = c(0, 1)) # guess 1
```
```
[1] 2467.106
```
```
ls_loss(X, y, beta = c(1, 2)) # guess 2
```
```
[1] 1702.547
```
```
ls_loss(X, y, beta = c(4, .25)) # guess 3
```
```
[1] 179.2952
```
We see that in our third guess we reduce the loss quite a bit relative to our first guess. This makes sense, because a value of 4 for the intercept and .25 for the coefficient for `x` are not nearly as far from the true values as our other guesses.
However, we can also see that they are not the best we could have done. In addition, with more data, our estimated coefficients would get closer to the true values.
```
model = lm(y ~ x, df) # fit the model and obtain parameter estimates using OLS
coef(model) # best guess given the data
```
```
(Intercept) x
4.8971969 0.4475284
```
```
sum(residuals(model)^2) # least squares loss
```
```
[1] 92.34413
```
In some relatively rare cases, a known approach is available and we do not have to search for the best estimates, but simply have to perform the correct steps that will result in them. For example, the following matrix operations will produce the best estimates for linear regression, which also happen to be the maximum likelihood estimates.
```
solve(crossprod(X)) %*% crossprod(X, y) # 'normal equations'
```
```
[,1]
[1,] 4.8971969
[2,] 0.4475284
```
```
coef(model)
```
```
(Intercept) x
4.8971969 0.4475284
```
Most of the time we don’t have such luxury, or even if we did, the computations might be too great for the size of our data.
Many statistical modeling techniques use *maximum likelihood* in some form or fashion, including Bayesian approaches, so you would do well to understand the basics. In this case, instead of minimizing the loss, we use an approach to maximize the probability of the observations of the target variable given the estimates of the parameters of the model (e.g. the coefficients in a regression)[20](#fn20).
The following shows how this would look for estimating a single value like a mean for a set of observations from a specific distribution[21](#fn21). In this case, the true underlying value that maximizes the likelihood is 5, but we typically don’t know this. We see that as our guesses for the mean would get closer to 5, the likelihood of the observed values increases. Our final guess based on the observed data won’t be exactly 5, but with enough data and an appropriate model for that data, we should get close.
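As a minimal sketch of that single-value case (the sample and the candidate means below are arbitrary choices for illustration), we can compute the log-likelihood of some observations under a few guesses for the mean, and see it peak near the true value of 5.
```
set.seed(1234)
obs = rnorm(20, mean = 5, sd = 1)   # observations whose true mean is 5
# log-likelihood of the sample under several guesses for the mean (sd fixed at 1)
sapply(c(3, 4, 5, 6), function(m) sum(dnorm(obs, mean = m, sd = 1, log = TRUE)))
```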
Again, some simple conceptual code can help us. The next bit of code follows a similar approach to what we had with least squares regression, but the goal is instead to maximize the likelihood of the observed data. In this example, I fix the estimated variance, but in practice we’d need to estimate that parameter as well. As probabilities are typically very small, we work with them on the log scale.
```
max_like <- function(X, y, beta, sigma = 1) {
likelihood = rep(0, nrow(X))
y_hat = rep(0, nrow(X))
for (n in 1:nrow(X)) {
y_hat[n] <- sum(X[n, ] * beta)
likelihood[n] = dnorm(y[n], mean = y_hat[n], sd = sigma, log = TRUE)
}
sum(likelihood)
}
```
```
max_like(X, y, beta = c(0, 1)) # guess 1
```
```
[1] -1327.593
```
```
max_like(X, y, beta = c(1, 2)) # guess 2
```
```
[1] -1022.18
```
```
max_like(X, y, beta = c(4, .25)) # guess 3
```
```
[1] -300.6741
```
```
logLik(model)
```
```
'log Lik.' -137.9115 (df=3)
```
To better understand maximum likelihood, it might help to think of our model from a data generating perspective, rather than in terms of ‘errors’. In the standard regression setting, we think of a single observation as follows:
\\\[\\mu \= b\_0 \+ b\_1\*x\_1 \+ ... \+ b\_p\*x\_p\\]
Or with matrix notation (consider it shorthand if not familiar):
\\\[\\mu \= X\\beta\\]
Now we display how \\(y\\) is generated:
\\\[y \\sim \\mathcal{N}(\\mathrm{mean} \= \\mu, \\mathrm{sd} \= \\sigma)\\]
In words, this means that our target observation \\(y\\) is assumed to be normally distributed with some mean and some standard deviation/variance. The mean \\(\\mu\\) is a function, or simply weighted sum, of our covariates \\(X\\). The unknown parameters we have to estimate are the \\(\\beta\\), i.e. weights, and standard deviation \\(\\sigma\\) (or variance \\(\\sigma^2\\)).
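As a small sketch of this data generating perspective (reusing the simulated `X` from before along with the true coefficients, and fixing \\(\\sigma\\) at 1), we could generate a target as follows; compare it with how `y` was constructed earlier.
```
beta_true = c(5, .5)                        # the true coefficients used earlier
mu = drop(X %*% beta_true)                  # the linear predictor, i.e. the mean
y_sim = rnorm(nrow(X), mean = mu, sd = 1)   # y ~ N(mu, sigma)
```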
One more note regarding estimation: it is good to distinguish models from estimation procedures. The following table lists common models and estimation procedures, each ordered roughly from more specific to more general.
| Label | Name |
| --- | --- |
| LM | Linear Model |
| GLM | Generalized Linear Model |
| GLMM | Generalized Linear Mixed Model |
| GAMM | Generalized Additive Mixed Model |
| OLS | Ordinary Least Squares |
| WLS | Weighted Least Squares |
| GLS | Generalized Least Squares |
| GEE | Generalized Estimating Equations |
| GMM | Generalized Method of Moments |
### Optimization
So we know the goal, but how do we get to it? In practice, we typically use *optimization* methods to iteratively search for the best estimates for the parameters of a given model. The functions we explored above provide a goal\- to minimize loss (however defined\- least squares for continuous, classification error for binary, etc.) or maximize the likelihood (or posterior probability in the Bayesian context). Whatever the goal, an optimizing *algorithm* will typically be used to find the estimates that reach that goal. Some approaches are very general, some are better for certain types of modeling problems. These algorithms continue to make guesses until some criterion has been reached (*convergence*)[22](#fn22).
You generally don’t need to know the details to use these algorithms to fit models, but knowing a little bit about the optimization process and available options may prove useful for dealing with more complex data scenarios, where convergence can be difficult. Some packages will even have documentation specifically dealing with convergence issues. In the more predictive models previously discussed, knowing more about the optimization algorithm may speed up the time it takes to train the model, or smooth out the variability in the process.
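As a minimal sketch of the idea (not how lm estimates things internally), we can hand the loss function defined earlier to base R’s general-purpose optimizer and let it search for the coefficients; the starting values below are arbitrary.
```
fit_ls = optim(
  par = c(0, 0),                             # arbitrary starting guesses
  fn  = function(b) ls_loss(X, y, beta = b)  # the loss to minimize
)
fit_ls$par    # should be close to coef(model)
fit_ls$value  # the minimized sum of squared errors
```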
As an aside, most Bayesian models use an estimation approach that is some form of *Markov Chain Monte Carlo*. It is a simulation based approach to generate subsequent estimates of parameters conditional on present estimates of them. One set of iterations is called a chain, and convergence requires multiple chains to mix well, i.e. come to similar conclusions about the parameter estimates. The goal even then is to maximize the log posterior distribution, similar to maximizing the likelihood. In the past this was an extremely computationally expensive procedure, but these days, modern laptops can handle even complex models with ease, though some data set sizes may be prohibitive still[23](#fn23).
Fitting Models
--------------
With practically every modern modeling package in R, the two components required to fit a model are the model formula and a data frame that contains the variables specified in that formula. Consider the following models. In general, the syntax is similar regardless of package, with special considerations for the type of model. The data argument is not included in these examples, but would be needed.
```
lm(y ~ x + z) # standard linear model/OLS
glm(y ~ x + z, family = 'binomial') # logistic regression with binary response
glm(y ~ x + z + offset(log(q)), family = 'poisson') # count/rate model
betareg::betareg(y ~ x + z) # beta regression for targets between 0 and 1
pscl::hurdle(y ~ x + z, dist = "negbin") # hurdle model with negative binomial response
lme4::glmer(y ~ x + (1 | group), family = 'binomial') # generalized linear mixed model
mgcv::gam(y ~ s(x)) # generalized additive model
survival::coxph(Surv(time = t, event = q) ~ x) # Cox Proportional Hazards Regression
# Bayesian mixed model
brms::brm(
y ~ x + (1 + x | group),
family = 'zero_one_inflated_beta',
prior = priors
)
```
For examples of many other types of models, see this [document](https://m-clark.github.io/R-models/).
Let’s finally get our hands dirty and run an example. We’ll use the world happiness dataset[24](#fn24). This is country\-level data based on surveys taken in various years, and the scores are averages or proportions, along with other values like GDP.
```
library(tidyverse) # load if you haven't already
load('data/world_happiness.RData')
# glimpse(happy)
```
| Variable | N | Mean | SD | Min | Q1 | Median | Q3 | Max | % Missing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| year | 1704 | 2012\.33 | 3\.69 | 2005\.00 | 2009\.00 | 2012\.00 | 2015\.00 | 2018\.00 | 0 |
| life\_ladder | 1704 | 5\.44 | 1\.12 | 2\.66 | 4\.61 | 5\.34 | 6\.27 | 8\.02 | 0 |
| log\_gdp\_per\_capita | 1676 | 9\.22 | 1\.19 | 6\.46 | 8\.30 | 9\.41 | 10\.19 | 11\.77 | 2 |
| social\_support | 1691 | 0\.81 | 0\.12 | 0\.29 | 0\.75 | 0\.83 | 0\.90 | 0\.99 | 1 |
| healthy\_life\_expectancy\_at\_birth | 1676 | 63\.11 | 7\.58 | 32\.30 | 58\.30 | 65\.00 | 68\.30 | 76\.80 | 2 |
| freedom\_to\_make\_life\_choices | 1675 | 0\.73 | 0\.14 | 0\.26 | 0\.64 | 0\.75 | 0\.85 | 0\.99 | 2 |
| generosity | 1622 | 0\.00 | 0\.16 | \-0\.34 | \-0\.12 | \-0\.02 | 0\.09 | 0\.68 | 5 |
| perceptions\_of\_corruption | 1608 | 0\.75 | 0\.19 | 0\.04 | 0\.70 | 0\.81 | 0\.88 | 0\.98 | 6 |
| positive\_affect | 1685 | 0\.71 | 0\.11 | 0\.36 | 0\.62 | 0\.72 | 0\.80 | 0\.94 | 1 |
| negative\_affect | 1691 | 0\.27 | 0\.08 | 0\.08 | 0\.21 | 0\.25 | 0\.31 | 0\.70 | 1 |
| confidence\_in\_national\_government | 1530 | 0\.48 | 0\.19 | 0\.07 | 0\.33 | 0\.46 | 0\.61 | 0\.99 | 10 |
| democratic\_quality | 1558 | \-0\.14 | 0\.88 | \-2\.45 | \-0\.79 | \-0\.23 | 0\.65 | 1\.58 | 9 |
| delivery\_quality | 1559 | 0\.00 | 0\.98 | \-2\.14 | \-0\.71 | \-0\.22 | 0\.70 | 2\.18 | 9 |
| gini\_index\_world\_bank\_estimate | 643 | 0\.37 | 0\.08 | 0\.24 | 0\.30 | 0\.35 | 0\.43 | 0\.63 | 62 |
| happiness\_score | 554 | 5\.41 | 1\.13 | 2\.69 | 4\.51 | 5\.31 | 6\.32 | 7\.63 | 67 |
| dystopia\_residual | 554 | 2\.06 | 0\.55 | 0\.29 | 1\.72 | 2\.06 | 2\.44 | 3\.84 | 67 |
The happiness score itself ranges from 2\.7 to 7\.6, with a mean of 5\.4 and standard deviation of 1\.1\.
Fitting a model with R is trivial, and at a minimum requires the two key ingredients mentioned before, the formula and data. Here we specify our target as `happiness_score`, with predictors democratic quality, generosity, and GDP per capita (logged).
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
```
And that’s all there is to it.
### Using matrices
Many packages still allow for matrix input instead of specifying a model formula, or even require it (but shouldn’t). This means separating data into a model (or design) matrix, and the vector or matrix of the target variable(s). For example, if we needed a speed boost and weren’t concerned about some typical output we could use lm.fit.
First we need to create the required components. We can use model.matrix to get what we need.
```
X = model.matrix(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
head(X)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
8 1 -1.8443636 0.08909068 7.500539
9 1 -1.8554263 0.05136492 7.497038
10 1 -1.8865659 -0.11219829 7.497755
19 1 0.2516293 -0.08441135 9.302960
20 1 0.2572919 -0.02068741 9.337532
21 1 0.2999450 -0.03264282 9.376145
```
Note the column of ones in the model matrix `X`. This represents our intercept, but that may not mean much to you unless you understand matrix multiplication (nice demo [here](http://matrixmultiplication.xyz/)). The other columns are just as they are in the data. Note also that the missing values have been removed.
```
nrow(happy)
```
```
[1] 1704
```
```
nrow(X)
```
```
[1] 411
```
The target variable must contain the same number of observations as in the model matrix, and there are various ways to create it to ensure this. Instead of model.matrix, there is also model.frame, which creates a data frame, with a method for extracting the corresponding target variable[25](#fn25).
```
X_df = model.frame(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy
)
y = model.response(X_df)
```
We can now fit the model as follows.
```
happy_model_matrix = lm.fit(X, y)
summary(happy_model_matrix) # only a standard list is returned
```
```
Length Class Mode
coefficients 4 -none- numeric
residuals 411 -none- numeric
effects 411 -none- numeric
rank 1 -none- numeric
fitted.values 411 -none- numeric
assign 4 -none- numeric
qr 5 qr list
df.residual 1 -none- numeric
```
```
coef(happy_model_matrix)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
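As a quick check of the matrix multiplication idea mentioned above (just a sketch), the predictions are simply the model matrix multiplied by the coefficients, and they match the fitted values from the earlier formula-based fit.
```
head(drop(X %*% coef(happy_model_matrix)))   # model matrix times coefficients
head(fitted(happy_model_base))               # same values from the formula approach
```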
In my experience, it is generally a bad sign if a package requires that you create the model matrix rather than doing so itself via the standard formula \+ data.frame approach. I typically find that such packages also tend to skip out on other conveniences, such as standard methods like predict, coef, etc., making them even more difficult to work with. In general, the only real time you should need to use model matrices is when you are creating your own modeling package, doing simulations, utilizing model speed\-ups, or otherwise know why you need them.
Summarizing Models
------------------
Once we have a model, we’ll want to summarize the results of it. Most modeling packages have a summary method we can apply, which will provide parameter estimates, some notion of model fit, inferential statistics, and other output.
```
happy_model_base_sum = summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
There is a lot of info to parse there, so we’ll go over some of it in particular. The summary provides several pieces of information: the coefficients or weights (`Estimate`)[26](#fn26), the standard errors (`Std. Error`), the t\-statistic (which is just the coefficient divided by the standard error), and the corresponding p\-value. The main things to look at are the actual coefficients and the direction of their relationship, positive or negative. For example, with regard to the effect of democratic quality, moving one point on democratic quality results in roughly a 0\.2 unit increase in happiness. Is this a notable effect? Knowing the scale of the outcome can help us understand the magnitude of the effect in a general sense. Earlier we showed that the standard deviation of the happiness scale was 1\.1, so in terms of standard deviation units, moving one point on democratic quality would result in roughly a 0\.2 standard deviation increase in country\-level happiness. We might consider this fairly small, but maybe not negligible.
To understand our results, we also need a sense of the uncertainty in the effects. The following provides confidence intervals for each of the coefficients.
```
confint(happy_model_base)
```
```
2.5 % 97.5 %
(Intercept) -1.62845472 -0.3925003
democratic_quality 0.08018814 0.2605586
generosity 0.77656244 1.5451306
log_gdp_per_capita 0.62786210 0.7589806
```
Now we have a sense of the range of plausible values for the coefficients. The value we actually estimate is the best guess given our circumstances, but slight changes in the data, the way we collect it, the time we collect it, etc., all would result in a slightly different result. The confidence interval provides a range of what we could expect given the uncertainty, and, given its importance, you should always report it. In fact, just showing the coefficient and the interval would be better than typical reporting of the statistical test results, though you can do both.
Variable Transformations
------------------------
Transforming variables can provide a few benefits in modeling, whether applied to the target, covariates, or both, and should regularly be used for most models. Some of these benefits include[27](#fn27):
* Interpretable intercepts
* More comparable covariate effects
* Faster estimation
* Easier convergence
* Help with heteroscedasticity
For example, merely centering predictor variables, i.e. subtracting the mean, provides a more interpretable intercept that will fall within the actual range of the target variable, telling us what the value of the target variable is when the covariates are at their means (or reference value if categorical).
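As a quick sketch of centering (the choice of predictor and the new variable name here are just for illustration), the intercept below reflects the expected happiness score at the average logged GDP per capita, rather than at an implausible value of zero.
```
happy_centered = happy %>%
  mutate(log_gdp_c = log_gdp_per_capita - mean(log_gdp_per_capita, na.rm = TRUE))
coef(lm(happiness_score ~ log_gdp_c, data = happy_centered))
```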
### Numeric variables
The following table shows the interpretation of two extremely common transformations applied to numeric variables\- logging and scaling (i.e. standardizing to mean zero, standard deviation one).
| target | predictor | interpretation |
| --- | --- | --- |
| y | x | \\(\\Delta y \= \\beta\\Delta x\\) |
| y | log(x) | \\(\\Delta y \\approx (\\beta/100\)\\%\\Delta x\\) |
| log(y) | x | \\(\\%\\Delta y \\approx 100\\cdot \\beta\\%\\Delta x\\) |
| log(y) | log(x) | \\(\\%\\Delta y \= \\beta\\%\\Delta x\\) |
| y | scale(x) | \\(\\Delta y \= \\beta\\sigma\\Delta x\\) |
| scale(y) | x | \\(\\sigma\\Delta y \= \\beta\\Delta x\\) |
| scale(y) | scale(x) | \\(\\sigma\\Delta y \= \\beta\\sigma\\Delta x\\) |
For example, to start with the normal linear model situation, a one\-unit change in \\(x\\), i.e. \\(\\Delta x \=1\\), leads to \\(\\beta\\) unit change in \\(y\\). If we log the target variable \\(y\\), the interpretation of the coefficient for \\(x\\) is that a one\-unit change in \\(x\\) leads to an (approximately) 100\\(\\cdot\\)\\(\\beta\\)% change in \\(y\\). The 100 changes the result from a proportion to percentage change. More concretely, if \\(\\beta\\) was .5, a unit change in \\(x\\) leads to (roughly) a 50% change in \\(y\\). If both were logged, a percentage change in \\(x\\) leads to a \\(\\beta\\) percentage change in y[28](#fn28). These percentage change interpretations are called [elasticities](https://en.wikipedia.org/wiki/Elasticity_(economics)) in econometrics and areas trained similarly[29](#fn29).
It is very common to use *standardized* variables as well, also called normalizing, or simply scaling. If \\(y\\) and \\(x\\) are both standardized, a one unit (i.e. one standard deviation) change in \\(x\\) leads to a \\(\\beta\\) standard deviation change in \\(y\\). Again, if \\(\\beta\\) was .5, a standard deviation change in \\(x\\) leads to a half standard deviation change in \\(y\\). In general, there is nothing to lose by standardizing, so you should employ it often.
Another common transformation, particularly in machine learning, is the *min\-max normalization*, changing variables to range from some minimum to some maximum, usually zero to one.
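A minimal helper for min-max normalization might look like the following (just a sketch; preprocessing packages like recipes also provide steps for this kind of rescaling).
```
min_max = function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}
summary(min_max(happy$log_gdp_per_capita))  # now ranges from 0 to 1
```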
### Categorical variables
A raw character string is not an analyzable unit, so character strings and labeled variables like factors must be converted for analysis to be conducted on them. For categorical variables, we can employ what is called *effects coding* to test for specific types of group differences. Far and away the most common approach is called *dummy coding* or *one\-hot encoding*[30](#fn30). In the next example, we will use dummy coding via the recipes package. I also show how to standardize a numeric variable, as previously discussed.
```
library(recipes)
nafta = happy %>%
filter(country %in% c('United States', 'Canada', 'Mexico'))
dummy = nafta %>%
recipe(~ country + generosity) %>% # formula approach for specifying variables
step_dummy(country, one_hot = TRUE) %>% # make variables for all factor levels
step_center(generosity) %>% # example of centering
step_scale(generosity) # example of standardizing
prep(dummy) %>% # estimates the necessary data to apply to this or other data sets
bake(nafta) %>% # apply the computations
print(n = 20)
```
```
# A tibble: 39 x 4
generosity country_Canada country_Mexico country_United.States
<dbl> <dbl> <dbl> <dbl>
1 0.835 1 0 0
2 0.819 1 0 0
3 0.891 1 0 0
4 0.801 1 0 0
5 0.707 1 0 0
6 0.841 1 0 0
7 1.06 1 0 0
8 1.21 1 0 0
9 0.940 1 0 0
10 0.838 1 0 0
11 0.590 1 0 0
12 0.305 1 0 0
13 -0.0323 1 0 0
14 NA 0 1 0
15 -1.19 0 1 0
16 -1.39 0 1 0
17 -1.08 0 1 0
18 -0.915 0 1 0
19 -1.22 0 1 0
20 -1.18 0 1 0
# … with 19 more rows
```
We see that the first few observations are Canada, and the next few Mexico. Note that doing this is rarely required for most modeling situations, but even if not, it sometimes can be useful to do so explicitly. If your modeling package cannot handle factor variables, and thus requires explicit coding, you’ll know, and typically these are the same ones that require matrix input.
Let’s run a regression as follows to show how it would happen automatically.
```
model_dummy = lm(happiness_score ~ country, data = nafta)
summary(model_dummy)
```
```
Call:
lm(formula = happiness_score ~ country, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.36887 0.09633 76.493 5.64e-14 ***
countryMexico -0.61107 0.13624 -4.485 0.00152 **
countryUnited States -0.34337 0.13624 -2.520 0.03275 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
Here, each coefficient represents the difference in means on the target variable between the reference group and the group in question. In this case, the U.S. is 0\.34 points lower on the happiness score than the reference country (Canada). The intercept tells us the mean of the reference group.
Other codings are possible, and these would allow for specific group comparisons or types of comparisons. This is sometimes called *contrast coding*. For example, we could compare Canada vs. both the U.S. and Mexico. By giving Canada twice the weight of the other two we can get this result. I also add a coding that will just compare Mexico vs. the U.S. The actual weights used are arbitrary, but in this case should sum to zero.
| group | canada\_vs\_other | mexico\_vs\_us |
| --- | --- | --- |
| Canada | \-0\.667 | 0\.0 |
| Mexico | 0\.333 | \-0\.5 |
| United States | 0\.333 | 0\.5 |
*Weights sum to zero, but are arbitrary.*
Adding such coding to a factor variable allows the corresponding models to use it in constructing the model matrix, rather than dummy coding. See the group means and calculate the results by hand for yourself.
```
nafta = nafta %>%
mutate(country_fac = factor(country))
contrasts(nafta$country_fac) = matrix(c(-2/3, 1/3, 1/3, 0, -.5, .5),
ncol = 2)
summary(lm(happiness_score ~ country_fac, data = nafta))
```
```
Call:
lm(formula = happiness_score ~ country_fac, data = nafta)
Residuals:
Min 1Q Median 3Q Max
-0.26960 -0.07453 -0.00615 0.06322 0.42920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.05072 0.05562 126.769 6.01e-16 ***
country_fac1 -0.47722 0.11799 -4.045 0.00291 **
country_fac2 0.26770 0.13624 1.965 0.08100 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1927 on 9 degrees of freedom
(27 observations deleted due to missingness)
Multiple R-squared: 0.692, Adjusted R-squared: 0.6236
F-statistic: 10.11 on 2 and 9 DF, p-value: 0.004994
```
```
nafta %>%
group_by(country) %>%
summarise(happy = mean(happiness_score, na.rm = TRUE))
```
```
# A tibble: 3 x 2
country happy
<chr> <dbl>
1 Canada 7.37
2 Mexico 6.76
3 United States 7.03
```
For example, we can see that for this balanced data set, the `_fac1` coefficient is the average of the U.S. and Mexico coefficients that we got from dummy coding, which represented their respective mean differences from Canada: (\-0\.611 \+ \-0\.343\) / 2 \= \-0\.477\. The `_fac2` coefficient is just the U.S. vs. Mexico mean difference, as expected.
In other circumstances, we can use *categorical embeddings* to reduce a very large number of categorical levels to a smaller number of numeric variables. This is very commonly employed in deep learning.
### Scales, indices, and dimension reduction
It is often the case that we have several correlated variables/items which do not all need to go into the model. For example, instead of using all items in a psychological scale, we can use the scale score, however defined, which is often just a *sum score* of the underlying items. Often people will create an index by using a *principal components analysis*, which can be thought of as a means to create a weighted sum score, or set of scores. Some sets of (especially binary) items may lend themselves to creating a single variable that simply notes whether any of those items was present or not.
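As a brief sketch of the PCA approach (the items chosen here are purely illustrative), the first principal component can serve as a weighted sum score of a set of correlated items.
```
items = happy %>%
  select(social_support, positive_affect, negative_affect) %>%
  drop_na()
pc = prcomp(items, scale. = TRUE)  # standardize the items first
index = pc$x[, 1]                  # scores on the first principal component
```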
#### Two\-step approaches
Some might do a preliminary analysis, such as a *cluster analysis* or *factor analysis*, to create new target or predictor variables. In the former, we reduce several variables to a single categorical label. Factor analysis does the same but results in a more expressive continuous metric. While fine to use, the corresponding scores are measured with error, so treating the categories or factor scores as you would observed variables will typically lead to overly optimistic results when you later include them in a subsequent analysis like a linear regression. Though this difference is probably slight in most applications, keen reviewers would probably point out the model shortcoming.
### Don’t discretize
Few things pain advanced modelers more than seeing results where a nice, expressive continuous metric is butchered into two categories (e.g. taking a numeric age and collapsing it to ‘old’ vs. ‘young’). There is rarely a reason to do this, and it is difficult to justify. There are reasons to collapse rare labels of a categorical variable, so that the new variable has fewer but more frequent categories. For example, data may have five or six race categories, but often the values are lumped into majority group vs. minority group due to each minority category having too few observations. But even that can cause problems, and doesn’t really overcome the fact that you simply didn’t have enough data to begin with.
Variable Importance
-------------------
In many circumstances, one of the modeling goals is to determine which predictor variable is most important out of the collection used in the model, or otherwise rank order the effectiveness of the predictors in some fashion. However, determining relative *variable importance* is at best an approximation with some methods, and a fairly hopeless endeavor with others. For just basic linear regression there are many methods that would not necessarily come to the same conclusions. Statistical significance, e.g. the Z/t statistic or p\-value, is simply not a correct way to do so. Some believe that [standardizing numeric variables](models.html#numeric-variables) is enough, but it is not, and doesn’t help with comparison to categorical inputs. In addition, if your model is not strong, it doesn’t make much sense to even worry about which predictor is the best of a bad lot.
Another reason that ‘importance’ is a problematic endeavor is that a statistical result doesn’t speak to practical action, nor does it speak to the fact that small effects may be very important. Sex may be an important driver in a social science model, but we may not be able to do anything about it for many outcomes that may be of interest. With health outcomes, any effect might be worthy of attention, however small, if it could practically increase the likelihood of survival.
Even if you can come up with a metric you like, you would still need some measure of uncertainty around that to make a claim that one predictor is reasonably better than another, and the only real approach to do that is usually some computationally expensive procedure that you will likely have to put together by hand.
As an example, for standard linear regression there are many methods that decompose \\(R^2\\) into relative contributions by the covariates. The tools to do so have to re\-run the model in many ways to produce these estimates (see the relaimpo package for example), but you would then have to use bootstrapping or similar approach to get interval estimates for those measures of importance. Certain techniques like random forests have a natural way to provide variable importance metrics, but providing inference on them would similarly be very computationally expensive.
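For example, a basic sketch with the relaimpo package mentioned above might look like the following; interval estimates would still require something like its bootstrapping functions or your own resampling scheme.
```
# install.packages('relaimpo')  # if not already installed
library(relaimpo)
calc.relimp(happy_model_base, type = 'lmg')  # decomposes R^2 across the predictors
```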
In the end though, I think it is probably best to assume that any effect that seems practically distinct from zero might be worthy of attention, and can be regarded for its own sake. The more actionable, the better.
Extracting Output
-----------------
The better you get at modeling, the more often you are going to need to get at certain parts of the model output easily. For example, we can extract the coefficients, residuals, model data and other parts from standard linear model objects from base R.
Why would you want to do this? A simple example would be to compare effects across different settings. We can collect the values, put them in a data frame, and then turn them into a table or visualization.
Typical modeling [methods](programming.html#methods) you might want to use:
* summary: print results in a legible way
* plot: plot something about the model (e.g. diagnostic plots)
* predict: make predictions, possibly on new data
* confint: get confidence intervals for parameters
* coef: extract coefficients
* fitted: extract fitted values
* residuals: extract residuals
* AIC: extract AIC
Here is an example of using the predict and coef methods.
```
predict(happy_model_base, newdata = happy %>% slice(1:5))
```
```
1 2 3 4 5
3.838179 3.959046 3.928180 4.004129 4.171624
```
```
coef(happy_model_base)
```
```
(Intercept) democratic_quality generosity log_gdp_per_capita
-1.0104775 0.1703734 1.1608465 0.6934213
```
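The other methods listed above work the same way; for example, a quick sketch with the same model:
```
head(fitted(happy_model_base))     # fitted values
head(residuals(happy_model_base))  # residuals
AIC(happy_model_base)              # Akaike information criterion
```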
Also, it’s useful to assign the summary results to an object, so that you can extract things that are also useful but would not be in the model object. We did this before, so now let’s take a look.
```
str(happy_model_base_sum, 1)
```
```
List of 12
$ call : language lm(formula = happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = happy)
$ terms :Classes 'terms', 'formula' language happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
.. ..- attr(*, "variables")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "factors")= int [1:4, 1:3] 0 1 0 0 0 0 1 0 0 0 ...
.. .. ..- attr(*, "dimnames")=List of 2
.. ..- attr(*, "term.labels")= chr [1:3] "democratic_quality" "generosity" "log_gdp_per_capita"
.. ..- attr(*, "order")= int [1:3] 1 1 1
.. ..- attr(*, "intercept")= int 1
.. ..- attr(*, "response")= int 1
.. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
.. ..- attr(*, "predvars")= language list(happiness_score, democratic_quality, generosity, log_gdp_per_capita)
.. ..- attr(*, "dataClasses")= Named chr [1:4] "numeric" "numeric" "numeric" "numeric"
.. .. ..- attr(*, "names")= chr [1:4] "happiness_score" "democratic_quality" "generosity" "log_gdp_per_capita"
$ residuals : Named num [1:411] -0.405 -0.572 0.057 -0.426 -0.829 ...
..- attr(*, "names")= chr [1:411] "8" "9" "10" "19" ...
$ coefficients : num [1:4, 1:4] -1.01 0.17 1.161 0.693 0.314 ...
..- attr(*, "dimnames")=List of 2
$ aliased : Named logi [1:4] FALSE FALSE FALSE FALSE
..- attr(*, "names")= chr [1:4] "(Intercept)" "democratic_quality" "generosity" "log_gdp_per_capita"
$ sigma : num 0.628
$ df : int [1:3] 4 407 4
$ r.squared : num 0.695
$ adj.r.squared: num 0.693
$ fstatistic : Named num [1:3] 310 3 407
..- attr(*, "names")= chr [1:3] "value" "numdf" "dendf"
$ cov.unscaled : num [1:4, 1:4] 0.2504 0.0229 -0.0139 -0.0264 0.0229 ...
..- attr(*, "dimnames")=List of 2
$ na.action : 'omit' Named int [1:1293] 1 2 3 4 5 6 7 11 12 13 ...
..- attr(*, "names")= chr [1:1293] "1" "2" "3" "4" ...
- attr(*, "class")= chr "summary.lm"
```
If we want the adjusted \\(R^2\\) or root mean squared error (RMSE, i.e. average error[31](#fn31)), they aren’t readily available in the model object, but they are in the summary object, so we can pluck them out as we would any other [list object](data_structures.html#lists).
```
happy_model_base_sum$adj.r.squared
```
```
[1] 0.6930647
```
```
happy_model_base_sum[['sigma']]
```
```
[1] 0.6282718
```
### Package support
There are many packages available to get at model results. One of the more widely used is broom, which has tidy and other functions that can apply in different ways to different models depending on their class.
```
library(broom)
tidy(happy_model_base)
```
```
# A tibble: 4 x 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -1.01 0.314 -3.21 1.41e- 3
2 democratic_quality 0.170 0.0459 3.71 2.33e- 4
3 generosity 1.16 0.195 5.94 6.18e- 9
4 log_gdp_per_capita 0.693 0.0333 20.8 5.93e-66
```
Some packages will produce tables for a model object that are more or less ready for publication. However, unless you know it’s in the exact style you need, you’re probably better off dealing with it yourself. For example, you can use tidy and do minor cleanup to get the table ready for publication.
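As a sketch of that sort of minor cleanup (the rounding and the column names here are arbitrary choices), one might do something like the following before passing the result to a table function.
```
tidy(happy_model_base) %>%
  mutate(across(where(is.numeric), function(x) round(x, 3))) %>%
  rename(Term = term, Estimate = estimate, SE = std.error)
```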
Visualization
-------------
> Models require visualization to be understood completely.
If you aren’t using visualization as a fundamental part of your model exploration, you’re likely leaving a lot of that exploration behind, and not communicating the results as well as you could to the broadest audience possible. When adding nonlinear effects, interactions, and more, visualization is a must. Thankfully there are many packages to help you get data you need to visualize effects.
We start with the emmeans package. In the following example we have a country effect, and wish to get the mean happiness scores per country. We then visualize the results. Here we can see that Mexico is lowest on average.
```
happy_model_nafta = lm(happiness_score ~ country + year, data = nafta)
library(emmeans)
country_means = emmeans(happy_model_nafta, ~ country)
country_means
```
```
country emmean SE df lower.CL upper.CL
Canada 7.37 0.064 8 7.22 7.52
Mexico 6.76 0.064 8 6.61 6.91
United States 7.03 0.064 8 6.88 7.17
Confidence level used: 0.95
```
```
plot(country_means)
```
We can also test for pairwise differences between the countries, and there’s no reason not to visualize that also. In the following, after adjustment Mexico and U.S. might not differ on mean happiness, but the other comparisons are statistically notable[32](#fn32).
```
pw_comparisons = contrast(country_means, method = 'pairwise', adjust = 'bonferroni')
pw_comparisons
```
```
contrast estimate SE df t.ratio p.value
Canada - Mexico 0.611 0.0905 8 6.751 0.0004
Canada - United States 0.343 0.0905 8 3.793 0.0159
Mexico - United States -0.268 0.0905 8 -2.957 0.0547
P value adjustment: bonferroni method for 3 tests
```
```
plot(pw_comparisons)
```
The following example uses ggeffects. First, we run a model with an interaction of country and year (we’ll talk more about interactions later). Then we get predictions for the year by country, and subsequently visualize. We can see that the trend, while negative for all countries, is more pronounced as we move south.
```
happy_model_nafta = lm(happiness_score ~ year*country, data = nafta)
library(ggeffects)
preds = ggpredict(happy_model_nafta, terms = c('year', 'country'))
plot(preds)
```
Whenever you move to generalized linear models or other more complicated settings, visualization is even more important, so it’s best to have some tools at your disposal.
Extensions to the Standard Linear Model
---------------------------------------
### Different types of targets
In many data situations, we do not have a continuous numeric target variable, or may want to use a different distribution to get a better fit, or adhere to some theoretical perspective. For example, count data is not continuous and often notably skewed, so assuming a normal symmetric distribution may not work as well. From a data generating perspective we can use the Poisson distribution[33](#fn33) for the target variable instead.
\\\[\\ln{\\mu} \= X\\beta\\]
\\\[\\mu \= e^{X\\beta}\\]
\\\[y \\sim \\mathcal{Pois}(\\mu)\\]
Conceptually nothing has really changed from what we were doing with the standard linear model, except for the distribution. We still have a mean function determined by our predictors, and this is what we’re typically mainly interested in from a theoretical perspective. We do have an added step, a transformation of the mean (now usually called the *linear predictor*). Poisson naturally works with the log of the target, but rather than do that explicitly, we instead exponentiate the linear predictor. The *link function*[34](#fn34), which is the natural log in this setting, has a corresponding *inverse link* (or mean function)\- exponentiation.
In code we can demonstrate this as follows.
```
set.seed(123) # for reproducibility
N = 1000 # sample size
beta = c(2, 1) # the true coefficient values
x = rnorm(N) # a single predictor variable
mu = exp(beta[1] + beta[2]*x) # the linear predictor
y = rpois(N, lambda = mu) # the target variable lambda = mean
glm(y ~ x, family = poisson)
```
```
Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
2.009 0.994
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 13240
Residual Deviance: 1056 AIC: 4831
```
A very common setting is the case where our target variable takes on only two values\- yes vs. no, alive vs. dead, etc. The most common model used in such settings is the logistic regression model. In this case, it will have a different link to go with a different distribution.
\\\[\\ln{\\frac{\\mu}{1\-\\mu}} \= X\\beta\\]
\\\[\\mu \= \\frac{1}{1\+e^{\-X\\beta}}\\]
\\\[y \\sim \\mathcal{Binom}(\\mathrm{prob}\=\\mu, \\mathrm{size} \= 1\)\\]
Here our link function is called the *logit*, and its inverse takes our linear predictor and puts it on the probability scale.
Again, some code can help drive this home.
```
mu = plogis(beta[1] + beta[2]*x)
y = rbinom(N, size = 1, mu)
glm(y ~ x, family = binomial)
```
```
Call: glm(formula = y ~ x, family = binomial)
Coefficients:
(Intercept) x
2.141 1.227
Degrees of Freedom: 999 Total (i.e. Null); 998 Residual
Null Deviance: 852.3
Residual Deviance: 708.8 AIC: 712.8
```
```
# extension to count/proportional model
# mu = plogis(beta[1] + beta[2]*x)
# total = rpois(N, lambda = 5)
# events = rbinom(N, size = total, mu)
# nonevents = total - events
#
# glm(cbind(events, nonevents) ~ x, family = binomial)
```
You’ll have noticed that when we fit these models we used glm instead of lm. The normal linear model is a special case of *generalized linear models*, which includes a specific class of distributions \- normal, poisson, binomial, gamma, beta and more \- collectively referred to as the [exponential family](https://en.wikipedia.org/wiki/Exponential_family). While this family can cover a lot of ground, you do not have to restrict yourself to it, and many R modeling packages will provide easy access to more. The main point is that you have tools to deal with continuous, binary, count, ordinal, and other types of data. Furthermore, not much necessarily changes conceptually from model to model besides the link function and/or distribution.
### Correlated data
Often in standard regression modeling situations we have data that is correlated, like when we observe multiple observations for individuals (e.g. longitudinal studies), or observations are clustered within geographic units. There are many ways to analyze all kinds of correlated data in the form of clustered data, time series, spatial data and similar. In terms of understanding the mean function and data generating distribution for our target variable, as we did in our previous models, not much changes. However, we will want to utilize estimation techniques that take this correlation into account. Examples of such models include:
* Mixed models (e.g. random intercepts, ‘multilevel’ models)
* Time series models (autoregressive)
* Spatial models (e.g. conditional autoregressive)
As demonstration is beyond the scope of this document, the main point here is awareness. But see these on [mixed models](https://m-clark.github.io/mixed-models-with-R/) and [generalized additive models](https://m-clark.github.io/generalized-additive-models/).
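Purely to give a flavor of the syntax, here is a minimal random intercept sketch using the lme4 package. This is an assumption\-laden illustration (lme4 is not used elsewhere here, and it assumes the happy data has a country identifier, as the nafta subset does), not a worked example.
```
library(lme4)

# random intercept for country: observations from the same country share an offset
happy_model_mixed = lmer(
  happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + (1 | country),
  data = happy
)

summary(happy_model_mixed)
```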
### Other extensions
There are many types of models that will take one well beyond the standard linear model. In some cases, the focus is multivariate, trying to model many targets at once. Other models will even be domain\-specific, tailored to a very narrow type of problem. Whatever the scenario, having a good understanding of the models we’ve been discussing will likely help you navigate these new waters much more easily.
Model Exploration Summary
-------------------------
At this point you should have a good idea of how to get started exploring models with R. Generally what you will explore will be based on theory, or merely curiosity. Specific packages will make certain types of models easy to pull off, without much change to the syntax from the standard `lm` approach of base R. Almost invariably, you will need to process the data to make it more amenable to analysis and/or more interpretable. After model fitting, summaries and visualizations go a long way toward understanding the part of the world you are exploring.
Model Exploration Exercises
---------------------------
### Exercise 1
With the Google app data, use a standard linear model (i.e. lm) to predict one of three target variables of your choosing:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
I would suggest preprocessing the number of reviews\- dividing by 100,000, scaling (standardizing), or logging it (for the latter you can add 1 first to deal with zeros[35](#fn35)).
Interpret the results. Visualize the difference in means between free and paid apps. See the [emmeans](models.html#visualization) example above.
```
load('data/google_apps.RData')
model = lm(? ~ reviews + type + size_in_MB, data = google_apps)
plot(emmeans::emmeans(model, ~type))
```
### Exercise 2
Rerun the above with interactions of the number of reviews or app size (or both) with type (via `a + b + a:b` or just `a*b` for two predictors). Visualize the interaction. Does it look like the effect differs by type?
```
model = lm(? ~ reviews + type*?, data = google_apps)
plot(ggeffects::ggpredict(model, terms = c('size_in_MB', 'type')))
```
### Exercise 3
Use the fish data to predict the number of fish caught `count` by the following predictor variables:
* `livebait`: whether live bait was used or not
* `child`: how many children present
* `persons`: total persons on the trip
If you wish, you can start with an `lm`, but as the number of fish caught is a count, it is suitable for using a poisson distribution via `glm` with `family = poisson`, so try that if you’re feeling up for it. If you exponentiate the coefficients, they can be interpreted as [incidence rate ratios](https://stats.idre.ucla.edu/stata/output/poisson-regression/).
```
load('data/fish.RData')
model = glm(?, data = fish)
```
Python Model Exploration Notebook
---------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/models.ipynb)
Model Criticism
===============
It isn’t enough to simply fit a particular model, we must also ask how well it matches the data under study, if it can predict well on new data, where it fails, and more. In the following we will discuss how we can better understand our model and its limitations.
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of the average explained variance to the average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower = FALSE)`, whose values can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept).
\\\[F \= \\frac{(\\textrm{Model}\_1\\ \\textrm{RV} \- \\textrm{Model}\_2\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
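For reference, the adjustment is a simple function of the sample size and the number of predictors. A quick sketch using values from the summary object above:
```
# adjusted R^2 = 1 - (1 - R^2) * (N - 1) / (N - p - 1), with p predictors
r2 = happy_model_base_sum$r.squared # ~0.6953
N  = 411                            # observations actually used by the model
p  = 3                              # number of predictors

1 - (1 - r2) * (N - 1) / (N - p - 1)  # ~0.693, matching the reported adjusted R^2
```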
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
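To make those three steps concrete, here is a minimal sketch using simulated Poisson data (the data and object names are purely illustrative):
```
set.seed(123)

x = rnorm(1000)
y = rpois(1000, lambda = exp(1 + .5 * x))

model_null = glm(y ~ 1, family = poisson)  # step 1: total deviance
model_main = glm(y ~ x, family = poisson)  # step 2: model deviance

# step 3: 'deviance explained'
(deviance(model_null) - deviance(model_main)) / deviance(model_null)
```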
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
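As a quick check on the arithmetic above, here is a small sketch that computes several of these metrics from the hypothetical 2 x 2 table:
```
# cells of the hypothetical confusion matrix above
A = 41; B = 21; C = 16; D = 13

c(
  accuracy    = (A + D) / (A + B + C + D),
  sensitivity = A / (A + C),   # aka recall, true positive rate
  specificity = D / (B + D),   # aka true negative rate
  ppv         = A / (A + B),   # aka precision
  npv         = D / (C + D)
)
```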
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Heteroscedasticity
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example using methods that obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or butterfly shape. If the variance was dependent on some of the model estimated values, we have a couple options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better, with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as target, and so is not capturing the more extreme values.
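The original figures are not reproduced here, but a minimal sketch of the scatterplot and density versions might look like the following (the actual figures may have been produced differently).
```
library(ggplot2)

pv_data = data.frame(
  observed  = happy_model_base$model$happiness_score,
  predicted = fitted(happy_model_base)
)

# predicted vs. observed, with a reference line for perfect prediction
ggplot(pv_data, aes(x = predicted, y = observed)) +
  geom_point(alpha = .25) +
  geom_abline(slope = 1, intercept = 0)

# density of predictions vs. density of the observed target
ggplot(pv_data) +
  geom_density(aes(x = observed)) +
  geom_density(aes(x = predicted), color = 'steelblue')
```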
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn’t really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation in the underlying data would certainly result in a different result for \\(R^2\\), accuracy, or any metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
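A minimal way to compute these yourself, with no extra packages, is sketched below. Exact numbers depend on the random split, so they may differ slightly from the table.
```
rmse = function(observed, predicted) sqrt(mean((observed - predicted)^2))

rmse(happy_model_train$model$happiness_score, fitted(happy_model_train))  # training RMSE
rmse(happy_test$happiness_score, predictions)                             # test RMSE
```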
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that complex models should always do better than simple ones. The question often then becomes if they are doing notably better given the additional complexity. So we’ll need some way to compare models in a way that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let’s add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple missing values the others do not. So we’ll start with some data processing. I will start by standardizing some of the variables, and making year start at zero, which will represent 2008, and finally dropping missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality and generosity leads to similar standard deviation increases in happiness. Moving 10 percentage points in GDP would lead to less than .1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker). The yearly trend is negative and statistically notable, but quite small in magnitude. In addition, the democratic quality effect is noticeably smaller, as it would seem that part of its contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, smaller negative likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
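The details are left aside in the text, but a rough sketch of one way to bootstrap the difference (not necessarily the approach used for the tables above) might look like this:
```
set.seed(1234)

boot_r2_diff = replicate(250, {
  idx = sample(nrow(happy_processed), replace = TRUE)   # resample rows with replacement
  d   = happy_processed[idx, ]
  m1  = update(happy_model_base, data = d)
  m2  = update(happy_model_more, data = d)
  summary(m2)$adj.r.squared - summary(m1)$adj.r.squared
})

quantile(boot_r2_diff, c(.025, .975))  # interval for the difference in adjusted R^2
```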
It would seem the difference in adjusted \\(R^2\\) is statistically different from zero, though the interval suggests it could be fairly small. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case as well, the more complex model appears to be the better one, as the interval for the difference in AIC does not contain zero, though it exhibits a notably wide range.
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it’s categorical). So democratic quality has a slight positive effect at the mean of life expectancy (0\.088\). However, this effect increases by 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation since we standardized). The same interpretation goes for life expectancy. Its base coefficient applies when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model with non\-normal distributions, the coefficient for the interaction term doesn’t even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for those countries with a high life expectancy, i.e. that are already in a good place in general. It may even be negative in countries in the contrary case. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
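That figure is not reproduced here, but a rough sketch of one way to make such a plot follows. The grid values, and the choice to hold the other covariates at typical values, are arbitrary.
```
library(ggplot2)

grid_data = expand.grid(
  democratic_quality = seq(-2, 2, length.out = 50),
  healthy_life_expectancy_at_birth = seq(-2, 2, length.out = 50),
  generosity = 0,  # standardized, so 0 is the mean
  log_gdp_per_capita = mean(happy_processed$log_gdp_per_capita)
)

grid_data$predicted = predict(happy_model_interact, newdata = grid_data)

ggplot(grid_data,
       aes(democratic_quality, healthy_life_expectancy_at_birth, z = predicted)) +
  geom_contour_filled()
```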
We see that the lowest expected happiness based on the model is with high democratic quality and low life expectancy. The best case scenario is to be high on both.
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the base R mgcv package because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the residual standard error (squared) in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we can actually do wiggly interactions also! Here is our interaction from before for the GAM case.
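One way to specify a wiggly interaction is with a tensor product smooth via `te()`; the following is a sketch, not necessarily the exact specification behind the original figure.
```
# wiggly interaction of democratic quality and life expectancy (sketch)
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),
  data = happy_processed
)

summary(happy_model_gam_interact)
```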
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do so. There are plenty of choices due to data processing, but then there may be many models to consider as well, and should be if you’re doing things correctly. But you know what? You don’t have to pick a best.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but not as widely applied elsewhere, even though it could be. As an example, if we (inversely) weight models by the AIC, we can get an average parameter that favors the better models, while not ignoring the lesser models if they aren’t notably poorer. People will use such an approach to get model averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1\.0 and zero for the others. So the model averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
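The weights in the table follow the standard Akaike weight formula (the table uses AICc, so the deltas differ slightly); a minimal sketch:
```
aics  = AIC(happy_model_base, happy_model_more, happy_model_interact, happy_model_gam)$AIC
delta = aics - min(aics)

weights = exp(-delta / 2) / sum(exp(-delta / 2))
round(weights, 3)  # essentially 0, 0, 0, 1
```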
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and can provide a better idea of its practical capabilities. Furthermore, pitting various models of differing complexity against one another will allow for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{\\textrm{MEV}/p}{\\textrm{RV}/(N\-p\-1\)}\\]
Conceptually it is a ratio of the average explained variance to the average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower.tail = FALSE)`, whose inputs can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. the same as before, plus one for the intercept).
\\\[F \= \\frac{(\\textrm{Model}\_1\\ \\textrm{RV} \- \\textrm{Model}\_2\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
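For the curious, the adjustment is a simple function of the sample size and the number of predictors. A quick check with the values from our summary (411 observations used, 3 predictors):
```
r2 = 0.6953   # multiple R-squared from the summary
N  = 411      # observations used: 407 residual df + 3 predictors + intercept
p  = 3        # number of predictors

1 - (1 - r2) * (N - 1) / (N - p - 1)  # ~0.6931, matching the adjusted R-squared
```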
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
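As a concrete sketch of the three steps above, suppose `mod_glm` is a hypothetical glm fit (e.g. logistic or Poisson). The glm object also stores the null deviance, so the ‘by hand’ version and the shortcut should agree.
```
# mod_glm is a hypothetical glm() fit; watch for missing data so that the
# null model is fit to the same observations.
null_mod  = update(mod_glm, . ~ 1)   # intercept-only model
tot_dev   = deviance(null_mod)       # total (null) deviance
model_dev = deviance(mod_glm)        # residual deviance for our model

(tot_dev - model_dev) / tot_dev      # 'deviance explained'

# shortcut using what the glm object already stores
1 - mod_glm$deviance / mod_glm$null.deviance
```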
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: the proportion of correctly predicted positives out of all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity suggests a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: the proportion of correctly predicted negatives out of all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity suggests a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
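To make the arithmetic concrete, here is a quick sketch using the hypothetical counts from the table above.
```
A = 41; B = 21; C = 16; D = 13   # cells of the 2 x 2 table above
total = A + B + C + D

c(
  accuracy    = (A + D) / total,   # ~ .59
  sensitivity = A / (A + C),       # recall / true positive rate, ~ .72
  specificity = D / (B + D),       # true negative rate, ~ .38
  ppv         = A / (A + B),       # precision, ~ .66
  npv         = D / (C + D),       # ~ .45
  f1          = 2 * (A / (A + B)) * (A / (A + C)) /
                (A / (A + B) + A / (A + C))   # ~ .69
)
```
Packages such as caret and yardstick will compute these (and many more) for you from the predicted and observed labels, but the arithmetic above is all that is going on.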
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Heteroscedasticity
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example, to obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or butterfly shape. If the variance were dependent on the model estimated values, we have a few options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better; with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
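Such plots are straightforward to produce. Here is a minimal sketch for the base model, assuming ggplot2 is loaded (it comes along with the tidyverse/tidymodels packages used elsewhere in this chapter).
```
pred_obs = data.frame(
  predicted = predict(happy_model_base),
  observed  = happy_model_base$model$happiness_score
)

# scatterplot: the closer the points are to the line, the better
ggplot(pred_obs, aes(x = observed, y = predicted)) +
  geom_point(alpha = .25) +
  geom_abline(slope = 1, intercept = 0, color = 'darkred')

# density version: compare the spread of predictions to the observed target
pred_obs %>%
  tidyr::pivot_longer(everything()) %>%
  ggplot(aes(x = value, color = name)) +
  geom_density()
```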
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic assessment, and adjusted \\(R^2\\) doesn’t really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation in the underlying data would certainly result in a different value for \\(R^2\\), accuracy, or any other metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
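As a rough sketch of where those numbers come from (yardstick’s `rmse_vec` would do the same), assuming the split and model above:
```
rmse = function(observed, predicted) sqrt(mean((observed - predicted)^2))

# training error: use the rows the model actually used (lm drops missing values)
rmse(happy_model_train$model$happiness_score, fitted(happy_model_train))

# test error: happy_test already had missing values dropped
rmse(happy_test$happiness_score, predictions)
```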
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that a more complex model will essentially always fit the data it was trained on better than a simpler one. The question then becomes whether it is doing notably better given the additional complexity. So we’ll need some way to compare models that takes model complexity into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let’s add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple of missing values the others do not. So we’ll start with some data processing. I will standardize some of the variables, make year start at zero (the recipe below centers it at 2005\), and deal with missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality or generosity leads to similar standard deviation increases in happiness. A 10% increase in GDP per capita would lead to less than a .1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though statistically discernible, is negative and quite small. In addition, the democratic quality effect is notably diminished, as it would seem part of its earlier contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, a smaller negative log\-likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
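As a quick check of the formula, we can compute AIC by hand from the log\-likelihood; the `df` attribute of the logLik object holds the number of estimated parameters (here the four coefficients plus the residual variance).
```
ll = logLik(happy_model_base)

-2 * as.numeric(ll) + 2 * attr(ll, 'df')  # should match AIC(happy_model_base)
```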
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
The interval for the difference in adjusted \\(R^2\\) does not include zero, so the improvement, while modest, appears to hold up under this added uncertainty. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case, the interval for the difference in AIC is entirely below zero, again favoring the more complex model, though it exhibits a notably wide range, reflecting considerable uncertainty in how much better it is.
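For the curious, here is a rough sketch of the bootstrap idea for the adjusted \\(R^2\\) difference. It is not necessarily the exact approach used for the tables above, just the flavor: resample the data, refit both models, and collect the difference.
```
set.seed(123)

r2_diff = replicate(500, {
  idx = sample(nrow(happy_processed), replace = TRUE)
  d   = happy_processed[idx, ]
  summary(update(happy_model_more, data = d))$adj.r.squared -
    summary(update(happy_model_base, data = d))$adj.r.squared
})

quantile(r2_diff, c(.025, .975))  # interval for the difference in adjusted R-squared
```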
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it’s categorical). So democratic quality has a small positive effect at the mean of life expectancy (0\.088\). However, this effect increases by 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation, since we standardized). The same interpretation goes for life expectancy. Its base coefficient applies when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model (e.g. with non\-normal distributions), the coefficient for the interaction term doesn’t even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for countries with high life expectancy, i.e. those that are already in a good place in general. It may even be negative for countries at the other end. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
We see that the lowest expected happiness based on the model occurs with high democratic quality and low life expectancy. The best\-case scenario is to be high on both.
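The contour plot itself isn’t shown here, but something similar can be produced by predicting over a grid of the two covariates while holding the others at their means. A sketch (the grid range and plotting choices are just illustrative):
```
library(ggplot2)

# grid over the two (standardized) covariates of interest
grid = expand.grid(
  democratic_quality               = seq(-3, 3, length.out = 100),
  healthy_life_expectancy_at_birth = seq(-3, 3, length.out = 100)
)

# hold the remaining covariates at their means
grid$generosity         = mean(happy_processed$generosity)
grid$log_gdp_per_capita = mean(happy_processed$log_gdp_per_capita)

grid$prediction = predict(happy_model_interact, newdata = grid)

ggplot(grid, aes(democratic_quality, healthy_life_expectancy_at_birth)) +
  geom_raster(aes(fill = prediction)) +
  geom_contour(aes(z = prediction), color = 'white')
```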
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the mgcv package, which comes with the standard R installation, because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the squared residual standard error in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we could actually do wiggly interactions too! Here is our interaction from before for the GAM case.
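The figure for that wiggly interaction isn’t reproduced here, but as a sketch of how such an interaction might be specified with mgcv’s tensor product smooths (this particular specification is illustrative, not necessarily the one behind the figure):
```
# smooth (tensor product) interaction between democratic quality and life expectancy
happy_model_gam_interact = gam(
  happiness_score ~ te(democratic_quality, healthy_life_expectancy_at_birth) +
    s(generosity) + s(log_gdp_per_capita),
  data = happy_processed
)

# plot the 2-d smooth; scheme = 2 gives a heatmap-style display
plot(happy_model_gam_interact, select = 1, scheme = 2)
```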
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable. One caveat: the null model shown was fit earlier to the original, unprocessed data, so its AIC is not really on the same footing as the rest.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices to make in data processing, and then there may be many models to consider as well\- and there should be, if you’re doing things correctly. But you know what? You don’t have to pick a single best one.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but not as widely applied elsewhere, even though it could be. As an example, if we weight models by their AIC (lower, i.e. better, AIC gets more weight), we can get an average parameter estimate that favors the better models, while not ignoring the lesser models if they aren’t notably poorer. People use such an approach to get model\-averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would be essentially 1\.0 and the others’ zero. So the model\-averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
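As a rough sketch of how such weights can be computed from AIC values (the table above uses the small\-sample corrected AICc, which I ignore here for simplicity):
```
# Akaike weights: rescale to the best (lowest) AIC, convert to relative
# likelihoods, then normalize so the weights sum to 1
aics = AIC(
  happy_model_base,
  happy_model_more,
  happy_model_interact,
  happy_model_gam
)$AIC

delta    = aics - min(aics)
rel_like = exp(-delta / 2)
weights  = rel_like / sum(rel_like)

round(weights, 3)  # essentially 1 for the GAM and 0 for the rest
```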
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and that provides a better idea of its practical capabilities. Furthermore, pitting various models of differing complexity against one another allows for more confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/model_criticism.html |
Model Criticism
===============
It isn’t enough to simply fit a particular model; we must also ask how well it matches the data under study, whether it can predict well on new data, where it fails, and more. In the following we will discuss how we can better understand our model and its limitations.
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model with no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentations. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of average squared variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. The p\-value is just `pf(309.6, 3, 407, lower.tail = FALSE)`, and the values needed can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_2\\ \\textrm{RV} \- \\textrm{Model}\_1\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\-1\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
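For reference, the adjustment is a simple function of the sample size and the number of predictors. A quick check using the values reported in the summary above:
```
# adjusted R^2 = 1 - (1 - R^2) * (N - 1) / (N - p - 1)
r2 = 0.6953   # multiple R-squared from the summary
N  = 411      # observations used: 407 residual df + 3 predictors + intercept
p  = 3        # number of predictors

1 - (1 - r2) * (N - 1) / (N - p - 1)  # ~0.693, matching the reported adjusted R-squared
```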
### Beyond OLS
People love \\(R^2\\), so much so that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting, where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily carry over. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered part of the model explained variance or the residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
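A minimal sketch of those steps for a GLM, using a hypothetical binary outcome `y`, predictors `x1` and `x2`, and data `d` (these names are placeholders, not part of our happiness data):
```
# 1. intercept-only model gives the total deviance
mod_null = glm(y ~ 1, data = d, family = binomial)

# 2. the model of interest gives its residual (unexplained) deviance
mod_full = glm(y ~ x1 + x2, data = d, family = binomial)

tot_dev   = deviance(mod_null)
model_dev = deviance(mod_full)

# 3. proportion of deviance explained
(tot_dev - model_dev) / tot_dev
```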
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
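As a quick sketch, here are several of these computed directly from the cell counts in the example table above.
```
# cells from the example confusion matrix
A = 41; B = 21; C = 16; D = 13
total = A + B + C + D

c(
  accuracy    = (A + D) / total,   # ~0.59
  sensitivity = A / (A + C),       # ~0.72 (recall / true positive rate)
  specificity = D / (B + D),       # ~0.38 (true negative rate)
  ppv         = A / (A + B),       # ~0.66 (precision)
  npv         = D / (C + D),       # ~0.45
  f1          = 2 * (A / (A + B)) * (A / (A + C)) /
    (A / (A + B) + A / (A + C))    # ~0.69
)
```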
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Heteroscedasticity
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example, to obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or butterfly shape. If the variance was dependent on some of the model estimated values, we have a couple options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better; with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn’t really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation in the underlying data would certainly result in a different result for \\(R^2\\), accuracy, or any other metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
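A sketch of that calculation, either by hand or with yardstick’s vector interface (yardstick is loaded along with tidymodels):
```
# RMSE on the test set, by hand
sqrt(mean((happy_test$happiness_score - predictions)^2))

# or with yardstick
rmse_vec(truth = happy_test$happiness_score, estimate = predictions)

# compare to the training error
sqrt(mean(residuals(happy_model_train)^2))
```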
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that more complex models will practically always fit the data at hand better than simpler ones. The question then becomes whether they are doing notably better given the additional complexity. So we’ll need some way to compare models that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let’s add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple missing values the others do not. So we’ll start with some data processing. I will start by standardizing some of the variables, making year start at zero (zero will represent 2005, the value we center on), and finally dealing with the missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality or generosity leads to similar (fractional) standard deviation increases in happiness. Since GDP is on the log scale, a 10% increase in GDP would lead to less than a .1 standard deviation increase in happiness.
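To see where that comes from: a 10% increase in GDP corresponds to an increase of log(1\.1\) in the log GDP predictor, so the implied change in (standardized) happiness is just the coefficient times that amount.
```
# effect of a 10% increase in GDP on (standardized) happiness
coef(happy_model_base)['log_gdp_per_capita'] * log(1.10)  # ~0.06 SDs
```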
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker). The yearly trend is negative but small, and while the democratic quality effect is still statistically notable, its coefficient is noticeably smaller than before, presumably because part of its contribution is shared with the now\-included life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test whether the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) that we compared our model to a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the same general approach to compare them. This ANOVA, or analysis of variance test, essentially asks whether the residual sum of squares (i.e. the loss) is statistically smaller for one model vs. the other. In many settings it is called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and it ignores various aspects of uncertainty in the parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare the *AIC* (Akaike Information Criterion). AIC is based on the likelihood for a given model, with a penalty added for complexity, since otherwise a more complex model would always result in a larger likelihood (i.e. a smaller negative log\-likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
The interval for the difference in adjusted \\(R^2\\) does not include zero, so the more complex model does appear to be a statistically notable, if modest, improvement. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
Here the interval for the difference in AIC also excludes zero, favoring the more complex model, though it exhibits a notably wide range, a reminder that there is considerable uncertainty in just how much better that model is.
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables involved in an interaction changes. For those variables, the base coefficient now only describes the effect when the variable they interact with is zero (or at its reference group if it’s categorical). So the coefficient for democratic quality (0\.088\) is its effect at the mean of life expectancy (zero, since we standardized), and that effect increases by roughly 0\.11 for every one standard deviation increase in life expectancy. The same interpretation goes for life expectancy: its base coefficient (0\.332\) applies when democratic quality is at its mean, and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, once we move beyond the standard linear model, e.g. to models with non\-normal distributions, the coefficient for the interaction term doesn’t even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for countries with high life expectancy, i.e. those that are already in a good place in general. It may even be negative for countries at the other end. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
We see that the lowest expected happiness based on the model occurs with high democratic quality and low life expectancy. The best\-case scenario is to be high on both.
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the mgcv package, which comes with the standard R installation, because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `healthy_life_expectancy_at_birth`: Effect is positive, but only if the country is around the mean or higher.
* `log_gdp_per_capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the residual standard error (squared) in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we could even do wiggly interactions, revisiting our interaction from before for the GAM case.
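As a hedged sketch, one way to express such a wiggly interaction in mgcv is a tensor product smooth (`te()`) of the two variables from the earlier interaction model; this is illustrative only and isn’t carried into the comparisons below.
```
# Sketch: a nonlinear ('wiggly') two-way interaction via a tensor product smooth.
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),
  data = happy_processed
)

summary(happy_model_gam_interact)
# plot(happy_model_gam_interact, scheme = 2)  # heatmap-style view of the 2-d smooth
```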
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices to make during data processing, and then there may be many models to consider as well, as there should be if you’re doing things correctly. But you know what? You don’t have to pick a single best model.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but it is not as widely applied elsewhere, even though it could be. As an example, if we (inversely) weight models by the AIC, we can get an average parameter that favors the better models, while not ignoring the lesser models if they aren’t notably poorer. People use such an approach to get model\-averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would essentially be 1\.0, with zero weight for the others, so the model\-averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
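For the curious, here is a minimal sketch of how such weights can be computed from the models above. It uses raw AIC rather than the small-sample corrected AICc shown in the table, so the values may differ slightly.
```
# Akaike weights: rescale AIC differences to relative likelihoods, then normalize.
aic_values = AIC(
  happy_model_base,
  happy_model_more,
  happy_model_interact,
  happy_model_gam
)$AIC

delta    = aic_values - min(aic_values)  # difference from the best (lowest AIC) model
rel_like = exp(-0.5 * delta)             # relative likelihood of each model
weights  = rel_like / sum(rel_like)      # weights sum to 1

round(weights, 3)
```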
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and can provide a better idea of its practical capabilities. Furthermore, pitting various models of differing complexity against one another allows for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{\\textrm{MEV}/p}{\\textrm{RV}/(N\-p\-1\)}\\]
Conceptually it is a ratio of average explained variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
This is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower.tail = FALSE)`; the values needed can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_2\\ \\textrm{RV} \- \\textrm{Model}\_1\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\-1\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
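As a quick sanity check, here is the adjustment computed by hand from the base model summary shown earlier; the formula is in the comment, and the N and p values are taken from the printout.
```
# adjusted R^2 = 1 - (1 - R^2) * (N - 1) / (N - p - 1)
r2 = happy_model_base_sum$r.squared  # 0.6953 from the printout
N  = 411                             # observations used (407 residual df + 3 predictors + 1)
p  = 3                               # number of predictors

1 - (1 - r2) * (N - 1) / (N - p - 1) # ~ 0.6931, matching the adjusted R-squared shown
```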
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
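Here is a minimal sketch of those steps for a logistic regression, where the data frame `df`, outcome `y`, and predictors `x1` and `x2` are purely hypothetical names used for illustration.
```
# Hypothetical example: proportion of deviance explained for a binomial GLM.
fit_null  = glm(y ~ 1,       data = df, family = binomial)  # step 1: intercept-only model
fit_model = glm(y ~ x1 + x2, data = df, family = binomial)  # step 2: model of interest

tot_dev   = deviance(fit_null)   # total (null) deviance
model_dev = deviance(fit_model)  # residual deviance left unexplained by the model

(tot_dev - model_dev) / tot_dev  # step 3: 'deviance explained'
```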
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
|  | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

|  | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D, as in the second table. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
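To make a few of these concrete, here is a minimal sketch using the counts from the hypothetical table above (A = 41, B = 21, C = 16, D = 13).
```
# Metrics computed from the 2 x 2 table above.
A = 41; B = 21; C = 16; D = 13
total = A + B + C + D  # 91

accuracy    = (A + D) / total      # ~ .59
sensitivity = A / (A + C)          # recall / true positive rate, ~ .72
specificity = D / (B + D)          # true negative rate, ~ .38
precision   = A / (A + B)          # positive predictive value, ~ .66
f1          = 2 * precision * sensitivity / (precision + sensitivity)  # ~ .69

c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity,
  precision = precision, f1 = f1)
```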
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Homoscedasticity (constant variance)
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy (for example, obtaining corrected standard errors), or are mostly overcome by having a large amount of data.
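As one example of such an easy fix, heteroscedasticity-robust (sandwich) standard errors for the base model can be obtained with the sandwich and lmtest packages; this is just a sketch of the idea, assuming those packages are installed.
```
# Robust (heteroscedasticity-consistent) standard errors for the base model.
library(sandwich)
library(lmtest)

coeftest(happy_model_base, vcov. = vcovHC(happy_model_base))
```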
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or a butterfly shape. If the variance were dependent on some of the model estimated values, we would have a few options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the former, the closer the result is to a line the better; with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
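A sketch of the scatterplot version might look like the following, assuming ggplot2 and the base model from before.
```
# Predicted vs. observed values for the base model, with a perfect-prediction line.
library(ggplot2)

pred_obs = data.frame(
  predicted = fitted(happy_model_base),
  observed  = happy_model_base$model$happiness_score
)

ggplot(pred_obs, aes(x = predicted, y = observed)) +
  geom_point(alpha = .25) +
  geom_abline(intercept = 0, slope = 1)
```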
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn’t really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation in the underlying data would certainly result in a different result for \\(R^2\\), accuracy, or any other metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
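As a sketch of how those values might be obtained with yardstick, using the objects created above (the training RMSE is computed on the rows actually used to fit the model; exact numbers may differ slightly from the table).
```
# RMSE on training vs. test data.
library(yardstick)

rmse_train = rmse_vec(
  truth    = model.frame(happy_model_train)$happiness_score,
  estimate = fitted(happy_model_train)
)

rmse_test = rmse_vec(
  truth    = happy_test$happiness_score,
  estimate = predictions
)

c(train = rmse_train, test = rmse_test)
```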
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that a more complex model will essentially always fit the data at hand better than a simpler one. The question often then becomes whether it is doing notably better given the additional complexity. So we’ll need some way to compare models that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let’s add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple missing values the others do not. So we’ll start with some data processing. I will start by standardizing some of the variables, and making year start at zero, with zero corresponding to 2005 (the value used for centering in the recipe below), and finally dropping missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that a one standard deviation increase in democratic quality or generosity leads to a similar (roughly 0\.14 to 0\.20) standard deviation increase in happiness. A 10% increase in GDP (roughly 0\.1 on the log scale) would lead to less than a 0\.1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though statistically significant, is negative and quite small. In addition, the democratic quality effect is notably smaller than before, as it would seem part of its contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and it ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or equivalently, a smaller negative log\-likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
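As a quick sanity check on the formula, we could compute the value ourselves from the model’s log\-likelihood. This is just a sketch for illustration; it assumes `happy_model_base` from above is still in the workspace.
```
# AIC 'by hand' from the formula above; compare to AIC(happy_model_base)
ll = logLik(happy_model_base)   # log-likelihood of the model
p  = attr(ll, 'df')             # number of estimated parameters (coefficients + residual variance)

-2 * as.numeric(ll) + 2 * p
```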
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
Here the interval for the difference in adjusted \\(R^2\\) does not contain zero, so the more complex model does appear to explain a statistically notable amount of additional variance, though the gain is modest. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case the interval for the difference in AIC lies entirely below zero, again favoring the more complex model, though the interval exhibits a notably wide range.
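For those curious, the following is a rough sketch of how one might obtain such an interval for the difference in adjusted \\(R^2\\) with the boot package. It is not the code used for the tables above, just an illustration of the idea; the function name is made up for this example.
```
library(boot)

# statistic: difference in adjusted R-squared between the two models for a resampled data set
r2_diff = function(data, idx) {
  d  = data[idx, ]
  m1 = lm(happiness_score ~ democratic_quality + generosity + log_gdp_per_capita, data = d)
  m2 = update(m1, . ~ . + healthy_life_expectancy_at_birth + year)
  summary(m2)$adj.r.squared - summary(m1)$adj.r.squared
}

set.seed(1234)
boot_r2 = boot(happy_processed, r2_diff, R = 500)

boot.ci(boot_r2, type = 'perc')   # percentile interval for the difference
```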
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables involved in the interaction changes. For those variables, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it’s categorical). So democratic quality has a slight positive effect at the mean of life expectancy (0\.088\), and this effect increases by 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation, since we standardized). The same interpretation goes for life expectancy: its base coefficient applies when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
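To make that concrete, here is a small sketch (not from the original text) that computes the slope for democratic quality at a few values of standardized life expectancy directly from the fitted coefficients.
```
cf = coef(happy_model_interact)

# slope for democratic quality = main effect + interaction coefficient * life expectancy value
dq_slope = function(life_exp) {
  cf['democratic_quality'] +
    cf['democratic_quality:healthy_life_expectancy_at_birth'] * life_exp
}

dq_slope(c(-1, 0, 1))   # 1 SD below the mean, at the mean, 1 SD above
```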
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model, e.g. with non\-normal distributions, the coefficient for the interaction term doesn’t even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for countries with a high life expectancy, i.e. those that are already in a good place in general. It may even be negative for countries at the lower end of life expectancy. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
We see that the lowest expected happiness based on the model occurs with high democratic quality and low life expectancy. The best case scenario is to be high on both.
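The contour plot itself isn’t shown here, but a sketch of one way to produce something similar follows: predict over a grid of the two interacting covariates while holding the other predictors at typical values. This is an illustration, not the original plotting code.
```
library(ggplot2)

pred_grid = expand.grid(
  democratic_quality               = seq(-2, 2, length.out = 50),
  healthy_life_expectancy_at_birth = seq(-2, 2, length.out = 50),
  generosity                       = 0,   # at the mean, since standardized
  log_gdp_per_capita               = mean(happy_processed$log_gdp_per_capita)
)

pred_grid$prediction = predict(happy_model_interact, newdata = pred_grid)

ggplot(pred_grid, aes(democratic_quality, healthy_life_expectancy_at_birth, z = prediction)) +
  geom_contour_filled()
```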
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the mgcv package, which comes with the standard R installation, because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the squared residual standard error in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we could actually do wiggly interactions too! Here is our interaction from before for the GAM case.
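The plot for that wiggly interaction isn’t reproduced here, but as a sketch, such a model could be specified with a tensor product smooth via mgcv’s `te()` (the exact specification behind the original figure may differ).
```
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),   # smooth 'interaction'
  data = happy_processed
)

summary(happy_model_gam_interact)
```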
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices in data processing alone, but there may be many models to consider as well, and there should be if you’re doing things correctly. But you know what? You don’t have to pick a single best one.
Model averaging is a common technique in the Bayesian world and in some applications of machine learning (usually under the guise of *stacking*), but it is not as widely applied elsewhere, even though it could be. As an example, if we weight models by their AIC (with lower values getting more weight), we can get an average parameter that favors the better models, while not ignoring the lesser models if they aren’t notably poorer. People will use such an approach to get model\-averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1\.0 and zero for the others, so the model\-averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
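As a sketch of how the weights in the table above can be computed, the standard Akaike weight is based on each model’s AIC difference from the best (lowest AIC) model.
```
aic_values = AIC(
  happy_model_base,
  happy_model_more,
  happy_model_interact,
  happy_model_gam
)$AIC

delta   = aic_values - min(aic_values)   # difference from the best (lowest) AIC
rel_lik = exp(-delta / 2)                # relative likelihood of each model
weights = rel_lik / sum(rel_lik)         # normalized to sum to 1

round(weights, 3)
```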
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, which provides a better idea of its practical capabilities. Furthermore, pitting various models of differing complexity against one another allows for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/model_criticism.html |
Model Criticism
===============
It isn’t enough to simply fit a particular model; we must also ask how well it matches the data under study, whether it can predict well on new data, where it fails, and more. In the following we will discuss how we can better understand our model and its limitations.
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{\\textrm{MEV}/p}{\\textrm{RV}/(N\-p\-1\)}\\]
Conceptually it is a ratio of the average explained variance to the average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower.tail = FALSE)`, the inputs for which can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_1\\ \\textrm{RV} \- \\textrm{Model}\_2\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
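As a quick illustration, the adjusted \\(R^2\\) can be computed from the standard \\(R^2\\), the sample size, and the number of predictors. The following sketch uses the values from the model above (411 complete observations, 3 predictors).
```
r2 = summary(happy_model_base)$r.squared   # roughly 0.6953
N  = 411                                   # observations actually used in the fit
p  = 3                                     # number of predictors

1 - (1 - r2) * (N - 1) / (N - p - 1)       # compare to the reported adjusted R-squared, 0.6931
```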
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLMs with non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
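To make those steps concrete, here is a minimal sketch for a logistic regression, using a hypothetical data frame `df` with a binary outcome `y` and predictors `x1` and `x2` (these names are placeholders, not from the text).
```
model_null = glm(y ~ 1,       data = df, family = binomial)   # step 1: intercept-only model
model_main = glm(y ~ x1 + x2, data = df, family = binomial)   # step 2: the desired model

tot_dev   = deviance(model_null)    # total deviance
model_dev = deviance(model_main)    # residual (unexplained) deviance

(tot_dev - model_dev) / tot_dev     # step 3: 'deviance explained'
```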
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
|  | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

|  | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
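As a small sketch, several of the metrics above can be computed directly from the counts in the example table (A, B, C, D as labeled).
```
A = 41; B = 21; C = 16; D = 13   # true positive, false positive, false negative, true negative

c(
  accuracy    = (A + D) / (A + B + C + D),
  sensitivity = A / (A + C),     # recall / true positive rate
  specificity = D / (B + D),     # true negative rate
  precision   = A / (A + B)      # positive predictive value
)
```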
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Heteroscedasticity
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example, to obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or a butterfly shape. If the variance were dependent on some of the model estimated values, we would have a few options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better, with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as target, and so is not capturing the more extreme values.
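Those plots aren’t reproduced here, but a basic version of the predicted vs. observed scatterplot could be put together as follows (a sketch, not the original plotting code).
```
library(ggplot2)

pred_obs = data.frame(
  predicted = fitted(happy_model_base),
  observed  = happy_model_base$model$happiness_score
)

ggplot(pred_obs, aes(x = observed, y = predicted)) +
  geom_point(alpha = .25) +
  geom_abline(intercept = 0, slope = 1)   # perfect predictions would fall on this line
```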
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn’t really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation in the underlying data would certainly result in a different value for \\(R^2\\), accuracy, or any metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
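As a sketch of the calculation (the exact values depend on the random split), RMSE is just the square root of the average squared error, computed here for the training rows the model actually used and for the held\-out test set.
```
rmse = function(observed, predicted) sqrt(mean((observed - predicted)^2))

# training error, based on the rows used in fitting
rmse(happy_model_train$model$happiness_score, fitted(happy_model_train))

# test error on data the model has not seen
rmse(happy_test$happiness_score, predict(happy_model_train, newdata = happy_test))
```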
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that a more complex model will essentially always fit the data at hand better than a simpler one. The question then becomes whether it is doing notably better given the additional complexity. So we’ll need some way to compare models that takes model complexity into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let’s add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, both need to be fit to the same data, and life expectancy has a couple of missing values the others do not. So we’ll start with some data processing. I will standardize some of the variables, make year start at zero (zero will represent 2005, the value it is centered on below), and finally deal with the missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality and generosity leads to similar standard deviation increases in happiness. A 10% increase in GDP (roughly 0\.1 on the log scale) would lead to less than a 0\.1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though statistically significant, is negative and quite small. In addition, the democratic quality effect is notably smaller than before, as it would seem part of its contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, smaller negative likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
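If you want to see where that value comes from, here is a minimal sketch applying the formula directly; `logLik` stores the number of estimated parameters (the coefficients plus the residual variance) in its `df` attribute.
```
# Reproduce AIC by hand from the log likelihood of the base model.
ll = logLik(happy_model_base)
-2 * as.numeric(ll) + 2 * attr(ll, 'df')  # should match AIC(happy_model_base)
```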
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
The interval for the difference in adjusted \\(R^2\\) does not contain zero, so the improvement, while modest, appears to be statistically distinguishable from zero. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case the interval for the difference in AIC also excludes zero, again favoring the more complex model, though the interval exhibits a notably wide range, a reminder that there is plenty of uncertainty in these comparisons.
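For the curious, here is a rough sketch of the kind of bootstrap one could run for the difference in adjusted \\(R^2\\). It is illustrative only (the intervals above were not necessarily computed exactly this way), and assumes the boot package and the happy_processed data.
```
library(boot)

# statistic: difference in adjusted R^2 between the two models on a resampled data set
r2_diff = function(data, idx) {
  d = data[idx, ]
  r2_base = summary(lm(happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
                       data = d))$adj.r.squared
  r2_more = summary(lm(happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
                         healthy_life_expectancy_at_birth + year,
                       data = d))$adj.r.squared
  r2_more - r2_base
}

boot_res = boot(happy_processed, r2_diff, R = 1000)
boot.ci(boot_res, type = 'perc')  # percentile interval for the difference
```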
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it’s categorical). So democratic quality has a slight positive effect at the mean of life expectancy (0\.088\). However, this effect increases by 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation, since we standardized). The same interpretation goes for life expectancy. Its base coefficient is the effect when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model, e.g. with non\-normal distributions, the coefficient for the interaction term doesn’t even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for countries with a high life expectancy, i.e. those that are already in a good place in general. It may even have a negative effect for countries at the low end of life expectancy. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
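The contour plot itself is not reproduced here, but the following sketch shows one way such a plot could be constructed with ggplot2. The grid ranges are arbitrary, and the remaining covariates are held at zero, i.e. their means, since the data were standardized.
```
library(ggplot2)

# grid of values for the two interacting covariates, others held at their means (0)
grid = expand.grid(
  democratic_quality               = seq(-2, 2, length.out = 100),
  healthy_life_expectancy_at_birth = seq(-2, 2, length.out = 100),
  generosity                       = 0,
  log_gdp_per_capita               = 0
)

grid$pred = predict(happy_model_interact, newdata = grid)

ggplot(grid, aes(democratic_quality, healthy_life_expectancy_at_birth, z = pred)) +
  geom_contour_filled() +
  labs(fill = 'Predicted happiness')
```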
We see that the lowest expected happiness based on the model occurs with high democratic quality and low life expectancy. The best case scenario is to be high on both.
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the mgcv package, which comes with the standard R installation, because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the residual standard error (squared) in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we could actually do wiggly interactions too! Here is our interaction from before for the GAM case.
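That plot is not shown here, but as a sketch, one way to specify such a smooth interaction in mgcv is with tensor product smooths via `ti()`; the exact specification used for the figure may differ.
```
# Main effect smooths plus a tensor product interaction for the two covariates of interest.
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    ti(democratic_quality) + ti(healthy_life_expectancy_at_birth) +
    ti(democratic_quality, healthy_life_expectancy_at_birth),
  data = happy_processed
)
summary(happy_model_gam_interact)
```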
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices in data processing alone, and then there may be many models to consider as well (and there should be, if you’re doing things correctly). But you know what? You don’t have to pick a single best model.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but it is not as widely applied elsewhere, even though it could be. As an example, if we (inversely) weight models by the AIC, we can get an averaged parameter that favors the better models, while not ignoring the lesser models if they aren’t notably poorer. People will use such an approach to get model averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1\.0 and zero for the others. So the model averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
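For reference, here is a minimal sketch of how such Akaike weights can be computed for the models above. The table uses AICc, which adds a small-sample correction; plain AIC is used here for simplicity.
```
# Convert AIC values to model weights: lower AIC -> higher weight.
aic_vals = AIC(happy_model_base, happy_model_more, happy_model_interact, happy_model_gam)$AIC
delta    = aic_vals - min(aic_vals)               # difference from the best (lowest) AIC
weights  = exp(-delta / 2) / sum(exp(-delta / 2))
round(weights, 3)
```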
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and can provide a better idea of its practical capabilities. Furthermore, pitting various models of differing complexities against one another will allow for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of average explained variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower.tail = FALSE)`; the needed values can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_2\\ \\textrm{RV} \- \\textrm{Model}\_1\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\-1\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
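For reference, a common form of the adjustment can be computed by hand for the base model; the N here is the 407 residual degrees of freedom plus the 4 estimated coefficients.
```
# adjusted R^2 = 1 - (1 - R^2) * (N - 1) / (N - p - 1), where p is the number of predictors
r2 = 0.6953   # from the summary above
N  = 411      # observations actually used in the model
p  = 3
1 - (1 - r2) * (N - 1) / (N - p - 1)  # ~ 0.6931, matching the summary output
```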
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLMs with non\-normal distributions, we can work with *deviance*, which is analogous to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach (a minimal sketch follows the steps):
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the deviance left unexplained by the model (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
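As a minimal sketch of the mechanics with a built-in data set (the particular model here is just for illustration), note that glm objects store both the null and residual deviance.
```
# 'deviance explained' = (total deviance - model deviance) / total deviance
fit = glm(am ~ mpg + wt, data = mtcars, family = binomial)
(fit$null.deviance - fit$deviance) / fit$null.deviance
```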
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
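To make a few of these concrete, here is a quick sketch computing some of the metrics from the hypothetical table above.
```
# Cell counts from the example confusion matrix.
A = 41; B = 21; C = 16; D = 13
total = A + B + C + D

round(c(
  accuracy    = (A + D) / total,
  sensitivity = A / (A + C),
  specificity = D / (B + D),
  ppv         = A / (A + B),
  f1          = 2 * (A / (A + B)) * (A / (A + C)) / (A / (A + B) + A / (A + C))
), 2)
```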
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Heteroscedasticity
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy (e.g. obtaining correct standard errors), or the issues are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. fanning out to one side or a butterfly shape. If the variance were dependent on the model estimated values, we would have a few options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better, and with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we’re doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
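Those figures are not reproduced here, but a rough sketch of such plots for the base model might look like the following; the aesthetics are illustrative.
```
library(ggplot2)

pred_obs = data.frame(
  predicted = fitted(happy_model_base),
  observed  = happy_model_base$model$happiness_score
)

# predictions vs. observed, with a reference line where they would be equal
ggplot(pred_obs, aes(observed, predicted)) +
  geom_point(alpha = .25) +
  geom_abline(intercept = 0, slope = 1)

# density of the predictions vs. density of the observed target
ggplot(pred_obs) +
  geom_density(aes(observed), color = 'black') +
  geom_density(aes(predicted), color = 'darkred')
```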
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn't really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation to the underlying data would certainly result in a different result for \\(R^2\\), accuracy, or any metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
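A rough sketch of how those values could be computed by hand follows (a dedicated function such as yardstick's `rmse()` serves the same purpose); the exact numbers will depend on how missing rows are handled.
```
# simple root mean squared error function
rmse = function(observed, predicted) {
  sqrt(mean((observed - predicted)^2, na.rm = TRUE))
}

# training error vs. test error
rmse(
  happy_train$happiness_score,
  predict(happy_model_train, newdata = happy_train)
)

rmse(happy_test$happiness_score, predictions)
```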
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that complex models should always do better than simple ones. The question often then becomes if they are doing notably better given the additional complexity. So we’ll need some way to compare models in a way that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let's add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple missing values that the others do not. So we'll start with some data processing. I will start by standardizing some of the variables, and making year start at zero, which will represent 2005 (the value the recipe below centers on), and finally dropping missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality and generosity leads to similar standard deviation increases in happiness. Moving 10 percentage points in GDP would lead to less than .1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though negative, is quite small. In addition, the democratic quality effect is noticeably smaller than before, as it would seem that part of its contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, smaller negative likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
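For reference, here is a minimal sketch of where that number comes from, using only the log likelihood and the parameter count R stores with it:
```
ll = logLik(happy_model_base)

# -2 * log likelihood + 2 * number of estimated parameters
# (for lm, the count includes the residual variance)
-2 * as.numeric(ll) + 2 * attr(ll, 'df')
```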
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is that we just want a confidence interval for the difference in adjusted \\(R^2\\).
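As a minimal sketch of what such a bootstrap might look like (the seed and number of replicates here are assumptions, not the settings used for the tables below):
```
set.seed(123)

boot_r2_diff = replicate(1000, {
  idx  = sample(nrow(happy_processed), replace = TRUE)
  boot = happy_processed[idx, ]

  # refit both models to the resampled data and take the difference in adjusted R-squared
  r2_base = summary(update(happy_model_base, data = boot))$adj.r.squared
  r2_more = summary(update(happy_model_more, data = boot))$adj.r.squared

  r2_more - r2_base
})

quantile(boot_r2_diff, c(.025, .975))
```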
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
Based on the interval for the difference, the gain in adjusted \\(R^2\\), while small, appears to be statistically different from zero, as the interval does not contain it. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case the interval for the difference in AIC does not contain zero either, and while it exhibits a notably wide range, by this measure too the more complex model appears to be the better one.
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it's categorical). So democratic quality has a slight positive effect at the mean of life expectancy (0\.088\). However, this effect increases by 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation since we standardized). The same interpretation goes for life expectancy. Its base coefficient is its effect when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model, e.g. with non\-normal distributions, the coefficient for the interaction term doesn't even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for those countries with a high life expectancy, i.e. that are already in a good place in general. It may even be negative in countries in the contrary case. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
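That contour plot isn't reproduced here, but here is a minimal sketch of how one could be constructed from the interaction model (holding the remaining covariates at fixed values, an assumption on my part):
```
library(ggplot2)

# grid over the two (standardized) covariates, holding the others constant
grid = expand.grid(
  democratic_quality               = seq(-3, 3, length.out = 100),
  healthy_life_expectancy_at_birth = seq(-3, 3, length.out = 100),
  generosity                       = 0,
  log_gdp_per_capita               = mean(happy_processed$log_gdp_per_capita, na.rm = TRUE)
)

grid$prediction = predict(happy_model_interact, newdata = grid)

ggplot(grid, aes(democratic_quality, healthy_life_expectancy_at_birth, z = prediction)) +
  geom_contour_filled()
```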
We see that the lowest expected happiness based on the model is with high democratic quality and low life expectancy. The best case scenario is to be high on both.
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the base R mgcv package because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the residual standard error (squared) in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we can actually do wiggly interactions also! Here is our interaction from before for the GAM case.
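The wiggly interaction itself isn't shown, but here is a sketch of how such a model could be specified with mgcv, using a tensor product smooth for the two\-way interaction:
```
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),
  data = happy_processed
)

# scheme = 2 displays the two-dimensional smooth (the third term) as a heatmap-style plot
plot(happy_model_gam_interact, select = 3, scheme = 2)
```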
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices due to data processing, but there may be many models to consider as well, and there should be if you're doing things correctly. But you know what? You don't have to pick a best.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but not as widely applied elsewhere, even though it could be. As an example, if we (inversely) weight models by the AIC, we can get an average parameter that favors the better models, while not ignoring the lesser models if they aren't notably poorer. People will use such an approach to get model averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1\.0 and zero for the others. So the model averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
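The weight column in the table above follows from a simple transformation of the information criterion; here is a minimal sketch using plain AIC (the table uses AICc, a small\-sample correction, but the idea is identical):
```
aic_vals = AIC(happy_model_base, happy_model_more, happy_model_interact, happy_model_gam)$AIC

delta    = aic_vals - min(aic_vals)   # difference from the best (lowest) AIC
rel_like = exp(-delta / 2)            # relative likelihood of each model

rel_like / sum(rel_like)              # Akaike weights
```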
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and can provide a better idea of the practical capabilities of it. Furthermore, pitting various models of differing complexities will allow for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/model_criticism.html |
Model Criticism
===============
It isn’t enough to simply fit a particular model; we must also ask how well it matches the data under study, if it can predict well on new data, where it fails, and more. In the following we will discuss how we can better understand our model and its limitations.
Model Fit
---------
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MEV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of average explained variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower = FALSE)`, whose values can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_1\\ \\textrm{RV} \- \\textrm{Model}\_2\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
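For reference, the usual adjustment is a simple function of the sample size and the number of predictors; here is a minimal sketch of the calculation for our model:
```
r2 = happy_model_base_sum$r.squared
n  = nobs(happy_model_base)   # observations actually used in the fit
p  = 3                        # number of predictors

# penalize R-squared for model complexity; matches the reported adjusted R-squared
1 - (1 - r2) * (n - 1) / (n - p - 1)
```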
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
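To make those steps concrete, here is a minimal sketch for a logistic regression with a simulated binary outcome (hypothetical data, not part of the happiness example):
```
set.seed(1234)

x = rnorm(500)
y = rbinom(500, size = 1, prob = plogis(.5 * x))

model_null = glm(y ~ 1, family = binomial)   # intercept only: total deviance
model_full = glm(y ~ x, family = binomial)   # desired model: residual deviance

tot_dev   = deviance(model_null)
model_dev = deviance(model_full)

(tot_dev - model_dev) / tot_dev   # 'deviance explained'
```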
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | 41 | 21 |
| Predicted \= 0 | 16 | 13 |

| | Observed \= 1 | Observed \= 0 |
| --- | --- | --- |
| Predicted \= 1 | A | B |
| Predicted \= 0 | C | D |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
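As a quick sketch, several of these can be computed directly from the four cells of the hypothetical table above:
```
A = 41; B = 21; C = 16; D = 13    # cells from the example confusion matrix

accuracy    = (A + D) / (A + B + C + D)
sensitivity = A / (A + C)         # recall, true positive rate
specificity = D / (B + D)         # true negative rate
ppv         = A / (A + B)         # precision
f1          = 2 * (ppv * sensitivity) / (ppv + sensitivity)

round(c(accuracy = accuracy, sensitivity = sensitivity,
        specificity = specificity, ppv = ppv, f1 = f1), 2)
```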
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Homoscedasticity (constant variance)
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example, to obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or butterfly shape. If the variance was dependent on some of the model estimated values, we have a couple options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the first, the closer the result is to a line the better; with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we're doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn't really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any potential deviation to the underlying data would certainly result in a different result for \\(R^2\\), accuracy, or any metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
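As a minimal sketch, yardstick (loaded as part of tidymodels) could be used like the following, with the object names taken from the code above.
```
# RMSE on the training data (using the rows actually used in the fit)
rmse_vec(
  truth    = happy_model_train$model$happiness_score,
  estimate = fitted(happy_model_train)
)

# RMSE on the test data
rmse_vec(
  truth    = happy_test$happiness_score,
  estimate = predictions
)
```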
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that more complex models will essentially always do better than simpler ones on the data they are fit to. The question then becomes whether they are doing notably better given the additional complexity. So we'll need some way to compare models that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let's add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, they need to be fit to the same data, and life expectancy has a couple of missing values the others do not. So we'll start with some data processing. I will start by standardizing some of the variables, making year start at zero (with zero representing 2005, the value used to center it in the recipe below), and finally dropping missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%  # center all numeric variables (including the outcome)...
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%   # ...and scale them, except GDP and year
step_knnimpute(all_numeric()) %>%                           # impute missing values via k-nearest neighbors
step_naomit(everything()) %>%                               # drop any rows still containing missing values
step_center(year, means = 2005) %>%                         # center year so that 0 represents 2005
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality or generosity leads to a roughly 0\.14 to 0\.2 standard deviation increase in happiness. A 10% increase in GDP (roughly 0\.1 on the log scale) would lead to less than a 0\.1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though statistically significant, is negative and quite small. In addition, the democratic quality effect is noticeably reduced, as it would seem some of its contribution was due to its correlation with life expectancy. But the key question is\- is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, smaller negative likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
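As a quick check of the formula, we can compute this by hand from the log likelihood; note that the parameter count reported by `logLik` already includes the residual variance.
```
ll = logLik(happy_model_base)
-2 * as.numeric(ll) + 2 * attr(ll, 'df')  # matches AIC(happy_model_base)
```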
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
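For those curious, here is a minimal sketch of one way such a bootstrap could be done; this is an illustration under my own assumptions, not necessarily the approach used to produce the tables below.
```
set.seed(123)

boot_r2_diff = replicate(1000, {
  idx = sample(nrow(happy_processed), replace = TRUE)   # resample rows with replacement
  d   = happy_processed[idx, ]
  r2_base = summary(update(happy_model_base, data = d))$adj.r.squared
  r2_more = summary(update(happy_model_more, data = d))$adj.r.squared
  r2_more - r2_base
})

quantile(boot_r2_diff, probs = c(.025, .975))  # interval for the difference
```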
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
It would seem the difference in adjusted \\(R^2\\) is statistically different from zero, though the interval suggests the gain is fairly modest. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case, the interval for the difference in AIC also excludes zero, favoring the more complex model, though it exhibits a notably wide range.
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or at the reference group if it's categorical). So democratic quality has a small positive effect at the mean of life expectancy (0\.088, since the variables are standardized). However, this effect increases by about 0\.11 when life expectancy increases by 1 (i.e. 1 standard deviation). The same interpretation goes for life expectancy: its base coefficient is the effect when democratic quality is at its mean (0\.332\), and the interaction term is interpreted identically.
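To make that concrete, here is a quick sketch of the implied democratic quality slope at one standard deviation below the mean of life expectancy, at the mean, and one above, using the coefficients from the model above.
```
b = coef(happy_model_interact)

# slope of democratic_quality at life expectancy = -1, 0, 1 (standardized)
b['democratic_quality'] +
  b['democratic_quality:healthy_life_expectancy_at_birth'] * c(-1, 0, 1)
```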
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model, e.g. with non\-normal distributions, the coefficient for the interaction term doesn't even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a clearly positive effect for countries with average or high life expectancy, i.e. those that are already in a good place in general, and it may even be slightly negative for countries at the low end. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
We see that the lowest expected happiness based on the model is with high democratic quality and low life expectancy. The best case scenario is to be high on both.
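As a sketch of how such a contour plot might be produced from the interaction model (the figure in the text may have been created differently), we can predict over a grid of the two variables while holding the others constant.
```
library(ggplot2)

grid = expand.grid(
  democratic_quality               = seq(-3, 3, length.out = 100),
  healthy_life_expectancy_at_birth = seq(-3, 3, length.out = 100),
  generosity                       = 0,   # standardized, so 0 = mean
  log_gdp_per_capita               = mean(happy_processed$log_gdp_per_capita)
)

grid$predicted_happiness = predict(happy_model_interact, newdata = grid)

ggplot(grid, aes(democratic_quality, healthy_life_expectancy_at_birth)) +
  geom_contour_filled(aes(z = predicted_happiness)) +
  labs(fill = 'Expected happiness')
```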
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that's all you really need to know\- effects don't have to be linear even with linear models! We will use the mgcv package, which comes with the standard R installation, because it is awesome and you don't need to install anything. In this case, we'll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients for the smooth terms (only the intercept appears in the parametric part). This is because the effect of any of these predictors depends on their value, so trying to summarize it with a single number would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the residual standard error (squared) in the other models, and is notably lower than even the model with the interaction (0\.11 vs. 0\.16\). We can also see that the adjusted \\(R^2\\) is higher as well (0\.87 vs. 0\.82\). If we wanted, we could actually do wiggly interactions too! Here is our interaction from before for the GAM case.
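A sketch of what such a wiggly interaction might look like in code, via mgcv's tensor product smooth `te()` (the plot in the text may have been produced with other tools):
```
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),  # smooth 2-d interaction
  data = happy_processed
)

# contour plot of the two-dimensional smooth
vis.gam(
  happy_model_gam_interact,
  view      = c('democratic_quality', 'healthy_life_expectancy_at_birth'),
  plot.type = 'contour'
)
```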
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do. There are plenty of choices due to data processing alone, but then there may be many models to consider as well, and there should be if you're doing things correctly. But you know what? You don't have to pick a single best model.
Model averaging is a common technique in the Bayesian world and in some applications of machine learning (usually under the guise of *stacking*), but not as widely applied elsewhere, even though it could be. As an example, if we weight models inversely by their AIC, we can get an average parameter that favors the better models, while not ignoring the lesser models if they aren't notably poorer. People use such an approach to get model averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1\.0 and zero for the others, so the model averaged predictions would be almost identical to the GAM predictions.
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
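As a sketch of where such weights come from (the table above uses AICc, but with this much data plain AIC gives essentially the same answer), the so\-called Akaike weights can be computed as follows.
```
aics   = AIC(happy_model_base, happy_model_more, happy_model_interact, happy_model_gam)$AIC
delta  = aics - min(aics)                        # difference from the best model
weight = exp(-delta / 2) / sum(exp(-delta / 2))  # relative likelihood, normalized
round(weight, 3)
```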
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and it provides a better idea of the model's practical capabilities. Furthermore, pitting various models of differing complexity against one another allows for more confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), in which we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
### Standard linear model
In the basic regression setting we can think of model fit in terms of a statistical result, or in terms of the match between our model predictions and the observed target values. The former provides an inferential perspective, but as we will see, is limited. The latter regards a more practical result, and may provide a more nuanced or different conclusion.
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of average squared variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower = FALSE)`, whose values can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_2\\ \\textrm{RV} \- \\textrm{Model}\_1\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\-1\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
#### Statistical assessment
In a standard linear model we can compare a model where there are no covariates vs. the model we actually care about, which may have many predictor variables. This is an almost useless test, but the results are typically reported both in standard output and academic presentation. Let’s think about it conceptually\- how does the variability in our target break down?
\\\[\\textrm{Total Variance} \= \\textrm{Model Explained Variance} \+ \\textrm{Residual Variance}\\]
So the variability in our target (TV) can be decomposed into that which we can explain with the predictor variables (MEV), and everything else that is not in our model (RV). If we have nothing in the model, then TV \= RV.
Let’s revisit the summary of our model. Note the *F\-statistic*, which represents a statistical test for the model as a whole.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
The standard F statistic can be calculated as follows, where \\(p\\) is the number of predictors[36](#fn36):
\\\[F \= \\frac{MV/p}{RV/(N\-p\-1\)}\\]
Conceptually it is a ratio of average squared variance to average unexplained variance. We can see this more explicitly as follows, where each predictor’s contribution to the total variance is provided in the `Sum Sq` column.
```
anova(happy_model_base)
```
```
Analysis of Variance Table
Response: happiness_score
Df Sum Sq Mean Sq F value Pr(>F)
democratic_quality 1 189.192 189.192 479.300 < 2.2e-16 ***
generosity 1 6.774 6.774 17.162 4.177e-05 ***
log_gdp_per_capita 1 170.649 170.649 432.324 < 2.2e-16 ***
Residuals 407 160.653 0.395
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
If we add those together and use our formula above we get:
\\\[F \= \\frac{366\.62/3}{160\.653/407} \= 309\.6\\]
Which is what is reported in the summary of the model. And the p\-value is just `pf(309.6, 3, 407, lower = FALSE)`, whose values can be extracted from the summary object.
```
happy_model_base_sum$fstatistic
```
```
value numdf dendf
309.5954 3.0000 407.0000
```
```
pf(309.6, 3, 407, lower.tail = FALSE)
```
```
[1] 1.239283e-104
```
Because the F\-value is so large and p\-value so small, the printed result in the summary doesn’t give us the actual p\-value. So let’s demonstrate again with a worse model, where the p\-value will be higher.
```
f_test = lm(happiness_score ~ generosity, happy)
summary(f_test)
```
```
Call:
lm(formula = happiness_score ~ generosity, data = happy)
Residuals:
Min 1Q Median 3Q Max
-2.81037 -0.89930 0.00716 0.84924 2.33153
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.41905 0.04852 111.692 < 2e-16 ***
generosity 0.89936 0.30351 2.963 0.00318 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.122 on 533 degrees of freedom
(1169 observations deleted due to missingness)
Multiple R-squared: 0.01621, Adjusted R-squared: 0.01436
F-statistic: 8.78 on 1 and 533 DF, p-value: 0.003181
```
```
pf(8.78, 1, 533, lower.tail = FALSE)
```
```
[1] 0.003181551
```
We can make this F\-test more explicit by actually fitting a null model and making the comparison. The following will provide the same result as before. We make sure to use the same data as in the original model, since there are missing values for some covariates.
```
happy_model_null = lm(happiness_score ~ 1, data = model.frame(happy_model_base))
anova(happy_model_null, happy_model_base)
```
```
Analysis of Variance Table
Model 1: happiness_score ~ 1
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Res.Df RSS Df Sum of Sq F Pr(>F)
1 410 527.27
2 407 160.65 3 366.62 309.6 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
In this case our F statistic generalizes to the following, where \\(\\textrm{Model}\_1\\) is the simpler model and \\(p\\) now refers to the total number of parameters estimated (i.e. same as before \+ 1 for the intercept)
\\\[F \= \\frac{(\\textrm{Model}\_2\\ \\textrm{RV} \- \\textrm{Model}\_1\\ \\textrm{RV})/(p\_2 \- p\_1\)}{\\textrm{Model}\_2\\ \\textrm{RV}/(N\-p\_2\-1\)}\\]
From the previous results, we can perform the necessary arithmetic based on this formula to get the F statistic.
```
((527.27 - 160.65)/3) / (160.65/407)
```
```
[1] 309.6054
```
#### \\(R^2\\)
The statistical result just shown is mostly a straw man type of test\- who actually cares if our model does statistically better than a model with nothing in it? Surely if you don’t do better than nothing, then you may need to think more intently about what you are trying to model and how. But just because you can knock the straw man down, it isn’t something to get overly excited about. Let’s turn instead to a different concept\- the amount of variance of the target variable that is explained by our predictors. For the standard linear model setting, this statistic is called *R\-squared* (\\(R^2\\)).
Going back to our previous notions, \\(R^2\\) is just:
\\\[R^2 \=\\textrm{Model Explained Variance}/\\textrm{Total Variance}\\]
This also is reported by default in our summary printout.
```
happy_model_base_sum
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy)
Residuals:
Min 1Q Median 3Q Max
-1.75376 -0.45585 -0.00307 0.46013 1.69925
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.01048 0.31436 -3.214 0.001412 **
democratic_quality 0.17037 0.04588 3.714 0.000233 ***
generosity 1.16085 0.19548 5.938 6.18e-09 ***
log_gdp_per_capita 0.69342 0.03335 20.792 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.6283 on 407 degrees of freedom
(1293 observations deleted due to missingness)
Multiple R-squared: 0.6953, Adjusted R-squared: 0.6931
F-statistic: 309.6 on 3 and 407 DF, p-value: < 2.2e-16
```
With our values from before for model and total variance, we can calculate it ourselves.
```
366.62 / 527.27
```
```
[1] 0.6953174
```
Here is another way. Let’s get the model predictions, and see how well they correlate with the target.
```
predictions = predict(happy_model_base)
target = happy_model_base$model$happiness_score
rho = cor(predictions, target)
rho
```
```
[1] 0.8338528
```
```
rho^2
```
```
[1] 0.6953106
```
Now you can see why it’s called \\(R^2\\). It is the squared Pearson \\(r\\) of the model expected value and the observed target variable.
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
##### Adjustment
One problem with \\(R^2\\) is that it always goes up, no matter what nonsense you add to a model. This is why we have an *adjusted \\(R^2\\)* that attempts to balance the sample size and model complexity. For very large data and/or simpler models, the difference is negligible. But you should always report the adjusted \\(R^2\\), as the default \\(R^2\\) is actually upwardly biased and doesn’t account for additional model complexity[37](#fn37).
### Beyond OLS
People love \\(R^2\\), so much that they will report it wherever they can, even coming up with things like ‘Pseudo\-\\(R^2\\)’ when it proves difficult. However, outside of the OLS setting where we assume a normal distribution as the underlying data\-generating mechanism, \\(R^2\\) has little application, and so is not very useful. In some sense, for any numeric target variable we can ask how well our predictions correlate with the observed target values, but the notion of ‘variance explained’ doesn’t easily follow us. For example, for other distributions the estimated variance is a function of the mean (e.g. Poisson, Binomial), and so isn’t constant. In other settings we have multiple sources of (residual) variance, and some sources where it’s not clear whether the variance should be considered as part of the model explained variance or residual variance. For categorical targets the notion doesn’t really apply very well at all.
At least for GLM for non\-normal distributions, we can work with *deviance*, which is similar to the residual sum of squares in the OLS setting. We can get a ‘deviance explained’ using the following approach:
1. Fit a null model, i.e. intercept only. This gives the total deviance (`tot_dev`).
2. Fit the desired model. This provides the model unexplained deviance (`model_dev`)
3. Calculate \\(\\frac{\\textrm{tot\_dev} \-\\textrm{model\_dev}}{\\textrm{tot\_dev}}\\)
But this value doesn’t really behave in the same manner as \\(R^2\\). For one, it can actually go down for a more complex model, and there is no standard adjustment, neither of which is the case with \\(R^2\\) for the standard linear model. At most this can serve as an approximation. For more complicated settings you will have to rely on other means to determine model fit.
### Classification
For categorical targets we must think about obtaining predictions that allow us to classify the observations into specific categories. Not surprisingly, this will require different metrics to assess model performance.
#### Accuracy and other metrics
A very natural starting point is *accuracy*, or what percentage of our predicted class labels match the observed class labels. However, our model will not spit out a character string, only a number. On the scale of the linear predictor it can be anything, but we will at some point transform it to the probability scale, obtaining a predicted probability for each category. The class associated with the highest probability is the predicted class. In the case of binary targets, this is just an if\_else statement for one class `if_else(probability >= .5, 'class A', 'class B')`.
With those predicted labels and the observed labels we create what is commonly called a *confusion matrix*, but would more sanely be called a *classification table*, *prediction table*, or just about any other name one could come up with in the first 10 seconds of trying. Let’s look at the following hypothetical result.
| | | | | | | | | | | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Observed \= 1 | Observed \= 0 | | --- | --- | --- | | Predicted \= 1 | 41 | 21 | | Predicted \= 0 | 16 | 13 | | | | Observed \= 1 | Observed \= 0 | | --- | --- | --- | | Predicted \= 1 | A | B | | Predicted \= 0 | C | D | |
In some cases we predict correctly, in other cases not. In this 2 x 2 setting we label the cells A through D. With things in place, consider the following the following nomenclature.
*True Positive*, *False Positive*, *True Negative*, *False Negative*: Above, these are A, B, D, and C respectively.
Now let’s see what we can calculate.
*Accuracy*: Number of correct classifications out of all predictions (A \+ D)/Total. In the above example this would be (41 \+ 13\)/91, about 59%.
*Error Rate*: 1 \- Accuracy.
*Sensitivity*: is the proportion of correctly predicted positives to all true positive events: A/(A \+ C). In the above example this would be 41/57, about 72%. High sensitivity would suggest a low type II error rate (see below), or high statistical power. Also known as *true positive rate*.
*Specificity*: is the proportion of correctly predicted negatives to all true negative events: D/(B \+ D). In the above example this would be 13/34, about 38%. High specificity would suggest a low type I error rate (see below). Also known as *true negative rate*.
*Positive Predictive Value* (PPV): proportion of true positives of those that are predicted positives: A/(A \+ B). In the above example this would be 41/62, about 66%.
*Negative Predictive Value* (NPV): proportion of true negatives of those that are predicted negative: D/(C \+ D). In the above example this would be 13/29, about 45%.
*Precision*: See PPV.
*Recall*: See sensitivity.
*Lift*: Ratio of positive predictions given actual positives to the proportion of positive predictions out of the total: (A/(A \+ C)) / ((A \+ B)/Total). In the above example this would be (41/(41 \+ 16\))/((41 \+ 21\)/(91\)), or 1\.06\.
*F Score* (F1 score): Harmonic mean of precision and recall: 2\*(Precision\*Recall)/(Precision\+Recall). In the above example this would be 2\*(.66\*.72\)/(.66\+.72\), about 0\.69\.
*Type I Error Rate* (false positive rate): proportion of true negatives that are incorrectly predicted positive: B/(B\+D). In the above example this would be 21/34, about 62%. Also known as *alpha*.
*Type II Error Rate* (false negative rate): proportion of true positives that are incorrectly predicted negative: C/(C\+A). In the above example this would be 16/57, about 28%. Also known as *beta*.
*False Discovery Rate*: proportion of false positives among all positive predictions: B/(A\+B). In the above example this would be 21/62, about 34%. Often used in multiple comparison testing in the context of ANOVA.
*Phi coefficient*: A measure of association: (A\*D \- B\*C) / (sqrt((A\+C)\*(D\+B)\*(A\+B)\*(D\+C))). In the above example this would be 0\.11\.
Several of these may also be produced on a per\-class basis when there are more than two classes. In addition, for multi\-class scenarios there are other metrics commonly employed. In general there are many, many other metrics for confusion matrices, any of which might be useful for your situation, but the above provides a starting point, and is enough for many situations.
Model Assumptions
-----------------
There are quite a few assumptions for the standard linear model that we could talk about, but I’ll focus on just a handful, ordered roughly in terms of the severity of violation.
* Correct model
* Homoscedasticity (i.e. constant variance)
* Independence of observations
* Normality
These concern bias (the first), accurate inference (most of the rest), or other statistical concepts (efficiency, consistency). The issue with most of the assumptions you learn about in your statistics course is that they mostly just apply to the OLS setting. Moreover, you can meet all the assumptions you want and still have a crappy model. Practically speaking, the effects on inference often aren’t large enough to matter in many cases, as we shouldn’t be making any important decision based on a p\-value, or slight differences in the boundaries of an interval. Even then, at least for OLS and other simpler settings, the solutions to these issues are often easy, for example, to obtain correct standard errors, or are mostly overcome by having a large amount of data.
Still, the diagnostic tools can provide clues to model failure, and so have utility in that sense. As before, visualization will aid us here.
```
library(ggfortify)
autoplot(happy_model_base)
```
The first plot shows the spread of the residuals vs. the model estimated values. By default, the three most extreme observations are noted. In this plot we are looking for a lack of any conspicuous pattern, e.g. a fanning out to one side or a butterfly shape. If the variance depended on some of the model estimated values, we would have a few options:
* Use a model that does not assume constant variance
* Add complexity to the model to better capture more extreme observations
* Change the assumed distribution
In this example we have it about as good as it gets. The second plot regards the normality of the residuals. If they are normally distributed, they would fall along the dotted line. Again, in practical application this is about as good as you’re going to get. In the following we can see that we have some issues, where predictions are worse at low and high ends, and we may not be capturing some of the tail of the target distribution.
Another plot we can use to assess model fit is simply to note the predictions vs. the observed values, and this sort of plot would be appropriate for any model. Here I show this both as a scatterplot and a density plot. With the former, the closer the result is to a line the better; with the latter, we can more adequately see what the model is predicting in relation to the observed values. In this case, while we're doing well, one limitation of the model is that it does not have as much spread as the target, and so is not capturing the more extreme values.
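As a rough sketch, such plots might be produced along the following lines with ggplot2 (assuming `happy_model_base` is the fitted model from before; the aesthetic choices are arbitrary).

```
library(ggplot2)
library(tidyr)

pp = data.frame(
  observed  = model.response(model.frame(happy_model_base)),
  predicted = fitted(happy_model_base)
)

# predicted vs. observed, with a reference line where the two would be equal
ggplot(pp, aes(x = observed, y = predicted)) +
  geom_point(alpha = .25) +
  geom_abline(slope = 1, intercept = 0)

# density plot comparing the predicted and observed distributions
ggplot(pivot_longer(pp, everything()), aes(x = value, color = name)) +
  geom_density()
```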
Beyond the OLS setting, assumptions may change, are more difficult to check, and guarantees are harder to come by. The primary one \- that you have an adequate and sufficiently complex model \- still remains the most vital. It is important to remember that these assumptions regard inference, not predictive capabilities. In addition, in many modeling scenarios we will actually induce bias to have more predictive capacity. In such settings statistical tests are of less importance, and there often may not even be an obvious test to use. Typically we will still have some means to get interval estimates for weights or predictions though.
Predictive Performance
----------------------
While we can gauge predictive performance to some extent with a metric like \\(R^2\\) in the standard linear model case, even then it is almost certainly an optimistic viewpoint, and adjusted \\(R^2\\) doesn't really deal with the underlying issue. What is the problem? The concern is that we are judging model performance on the very data it was fit to. Any change in the underlying data would certainly result in a different \\(R^2\\), accuracy, or whatever metric we choose to look at.
So the better estimate of how the model is doing is to observe performance on data it hasn’t seen, using a metric that better captures how close we hit the target. This data goes by different names\- *test set*, *validation set*, *holdout sample*, etc., but the basic idea is that we use some data that wasn’t used in model fitting to assess performance. We can do this in any data situation by randomly splitting into a data set for training the model, and one used for testing the model’s performance.
```
library(tidymodels)
set.seed(12)
happy_split = initial_split(happy, prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split) %>% drop_na()
happy_model_train = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_train
)
predictions = predict(happy_model_train, newdata = happy_test)
```
Comparing our loss on training and test (i.e. RMSE), we can see the loss is greater on the test set. You can use a package like yardstick to calculate this.
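Here is a minimal sketch of that comparison, continuing from the split above and computed by hand (yardstick's RMSE function would give the same values); the results are summarized in the table below.

```
# RMSE on the training data (residuals come from the fitted model)
rmse_train = sqrt(mean(residuals(happy_model_train)^2))

# RMSE on the test data the model has not seen
rmse_test = sqrt(mean((happy_test$happiness_score - predictions)^2))

c(
  train        = rmse_train,
  test         = rmse_test,
  pct_increase = 100 * (rmse_test / rmse_train - 1)
)
```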
| RMSE\_train | RMSE\_test | % increase |
| --- | --- | --- |
| 0\.622 | 0\.758 | 21\.9 |
While in many settings we could simply report performance metrics from the test set, for a more accurate assessment of test error, we’d do better by taking an average over several test sets, an approach known as *cross\-validation*, something we’ll talk more about [later](ml.html#cross-validation).
In general, we may do okay in scenarios where the model is simple and uses a lot of data, but even then we may find a notable increase in test error relative to training error. For more complex models and/or with less data, the difference in training vs. test could be quite significant.
Model Comparison
----------------
Up until now the focus has been entirely on one model. However, if you’re trying to learn something new, you’ll almost always want to have multiple plausible models to explore, rather than just confirming what you think you already know. This can be as simple as starting with a baseline model and adding complexity to it, but it could also be pitting fundamentally different theoretical models against one another.
A notable problem is that more complex models will essentially always fit the data at hand better than simpler ones. The question often then becomes whether they are doing notably better given the additional complexity. So we'll need some way to compare models that takes the complexity of the model into account.
### Example: Additional covariates
A starting point for adding model complexity is simply adding more covariates. Let's add life expectancy and a yearly trend to our happiness model. To make this model comparable to our baseline model, the two need to be fit to the same data, and life expectancy has a couple of missing values that the others do not. So we'll start with some data processing. I will standardize some of the variables, make year start at zero, and finally drop missing values. Refer to our previous section on [transforming variables](models.html#numeric-variables) if you want a refresher.
```
happy_recipe = happy %>%
select(
year,
happiness_score,
democratic_quality,
generosity,
healthy_life_expectancy_at_birth,
log_gdp_per_capita
) %>%
recipe(happiness_score ~ . ) %>%
step_center(all_numeric(), -log_gdp_per_capita, -year) %>%
step_scale(all_numeric(), -log_gdp_per_capita, -year) %>%
step_knnimpute(all_numeric()) %>%
step_naomit(everything()) %>%
step_center(year, means = 2005) %>%
prep()
happy_processed = happy_recipe %>% bake(happy)
```
Now let’s start with our baseline model again.
```
happy_model_base = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
data = happy_processed
)
summary(happy_model_base)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita, data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.53727 -0.29553 -0.01258 0.32002 1.52749
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.49178 0.10993 -49.958 <2e-16 ***
democratic_quality 0.14175 0.01441 9.838 <2e-16 ***
generosity 0.19826 0.01096 18.092 <2e-16 ***
log_gdp_per_capita 0.59284 0.01187 49.946 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.44 on 1700 degrees of freedom
Multiple R-squared: 0.7805, Adjusted R-squared: 0.7801
F-statistic: 2014 on 3 and 1700 DF, p-value: < 2.2e-16
```
We can see that moving one standard deviation on democratic quality and generosity leads to similar standard deviation increases in happiness. A 10% increase in GDP would lead to less than a 0.1 standard deviation increase in happiness.
Now we add our life expectancy and yearly trend.
```
happy_model_more = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed
)
summary(happy_model_more)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + year,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.50879 -0.27081 -0.01524 0.29640 1.60540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.691818 0.148921 -24.790 < 2e-16 ***
democratic_quality 0.099717 0.013618 7.322 3.75e-13 ***
generosity 0.189113 0.010193 18.554 < 2e-16 ***
log_gdp_per_capita 0.397559 0.016121 24.661 < 2e-16 ***
healthy_life_expectancy_at_birth 0.311129 0.018732 16.609 < 2e-16 ***
year -0.007363 0.002728 -2.699 0.00702 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4083 on 1698 degrees of freedom
Multiple R-squared: 0.8111, Adjusted R-squared: 0.8106
F-statistic: 1459 on 5 and 1698 DF, p-value: < 2.2e-16
```
Here it would seem that life expectancy has a notable effect on happiness (shocker), while the yearly trend, though statistically notable, is negative and quite small. In addition, the democratic quality effect has diminished, as it would seem part of its contribution was due to its correlation with life expectancy. But the key question is: is this model better?
The adjusted \\(R^2\\) seems to indicate that we are doing slightly better with this model, but not much (0\.81 vs. 0\.78\). We can test if the increase is a statistically notable one. [Recall previously](model_criticism.html#statistical-assessment) when we compared our model versus a null model to obtain a statistical test of model fit. Since these models are *nested*, i.e. one is a simpler form of the other, we can use the more general approach we depicted to compare these models. This ANOVA, or analysis of variance test, is essentially comparing whether the residual sum of squares (i.e. the loss) is statistically less for one model vs. the other. In many settings it is often called a *likelihood ratio test*.
```
anova(happy_model_base, happy_model_more, test = 'Chi')
```
```
Analysis of Variance Table
Model 1: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita
Model 2: happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth + year
Res.Df RSS Df Sum of Sq Pr(>Chi)
1 1700 329.11
2 1698 283.11 2 45.997 < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The `Df` from the test denotes that we have two additional parameters, i.e. coefficients, in the more complex model. But the main thing to note is whether the model statistically reduces the RSS, and so we see that this is a statistically notable improvement as well.
I actually do not like this test though. It requires nested models, which in some settings is either not the case or can be hard to determine, and ignores various aspects of uncertainty in parameter estimates. Furthermore, it may not be appropriate for some complex model settings. An approach that works in many settings is to compare *AIC* (Akaike Information Criterion). AIC is a value based on the likelihood for a given model, but which adds a penalty for complexity, since otherwise any more complex model would result in a larger likelihood (or in this case, smaller negative likelihood). In the following, \\(\\mathcal{L}\\) is the likelihood, and \\(\\mathcal{P}\\) is the number of parameters estimated for the model.
\\\[AIC \= \-2 ( \\ln (\\mathcal{L})) \+ 2 \\mathcal{P}\\]
```
AIC(happy_model_base)
```
```
[1] 2043.77
```
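To connect the output to the formula, here is a quick sketch reproducing the value by hand; the `df` attribute of `logLik()` counts the estimated parameters (including the residual variance).

```
ll = logLik(happy_model_base)

-2 * as.numeric(ll) + 2 * attr(ll, 'df')  # should match AIC(happy_model_base)
```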
The value itself is meaningless until we compare models, in which case the lower value is the better model (because we are working with the negative log likelihood). With AIC, we don’t have to have nested models, so that’s a plus over the statistical test.
```
AIC(happy_model_base, happy_model_more)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
```
Again, our new model works better. However, this still may miss out on some uncertainty in the models. To try and capture this, I will calculate interval estimates for the adjusted \\(R^2\\) via *bootstrapping*, and then calculate an interval for their difference. The details are beyond what I want to delve into here, but the gist is we just want a confidence interval for the difference in adjusted \\(R^2\\).
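The following is only a rough sketch of that sort of bootstrap using rsample; the number of resamples and other settings are illustrative assumptions, not necessarily what produced the tables below.

```
library(tidymodels)  # rsample, purrr, dplyr

set.seed(123)

boot_r2 = bootstraps(happy_processed, times = 250) %>%
  mutate(
    r2_base = map_dbl(
      splits,
      ~ summary(lm(formula(happy_model_base), data = analysis(.x)))$adj.r.squared
    ),
    r2_more = map_dbl(
      splits,
      ~ summary(lm(formula(happy_model_more), data = analysis(.x)))$adj.r.squared
    ),
    r2_diff = r2_more - r2_base
  )

# percentile interval for the difference in adjusted R^2
quantile(boot_r2$r2_diff, c(.025, .975))
```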
| model | r2 | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 0\.780 | 0\.762 | 0\.798 |
| more | 0\.811 | 0\.795 | 0\.827 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in \\(R^2\\) | 0\.013 | 0\.049 |
It would seem the difference in adjusted \\(R^2\\) is statistically different from zero, though the interval suggests the improvement may be fairly modest. Likewise we could do the same for AIC.
| model | aic | 2\.5% | 97\.5% |
| --- | --- | --- | --- |
| base | 2043\.770 | 1917\.958 | 2161\.231 |
| more | 1791\.237 | 1657\.755 | 1911\.073 |
| | 2\.5% | 97\.5% |
| --- | --- | --- |
| Difference in AIC | \-369\.994 | \-126\.722 |
In this case, the interval for the difference in AIC also excludes zero, again favoring the more complex model, though the interval exhibits a notably wide range.
### Example: Interactions
Let’s now add interactions to our model. Interactions allow the relationship of a predictor variable and target to vary depending on the values of another covariate. To keep things simple, we’ll add a single interaction to start\- I will interact democratic quality with life expectancy.
```
happy_model_interact = lm(
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita +
healthy_life_expectancy_at_birth +
democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed
)
summary(happy_model_interact)
```
```
Call:
lm(formula = happiness_score ~ democratic_quality + generosity +
log_gdp_per_capita + healthy_life_expectancy_at_birth + democratic_quality:healthy_life_expectancy_at_birth,
data = happy_processed)
Residuals:
Min 1Q Median 3Q Max
-1.42801 -0.26473 -0.00607 0.26868 1.48161
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.63990 0.14517 -25.074 < 2e-16 ***
democratic_quality 0.08785 0.01335 6.580 6.24e-11 ***
generosity 0.16479 0.01030 16.001 < 2e-16 ***
log_gdp_per_capita 0.38501 0.01578 24.404 < 2e-16 ***
healthy_life_expectancy_at_birth 0.33247 0.01830 18.165 < 2e-16 ***
democratic_quality:healthy_life_expectancy_at_birth 0.10526 0.01105 9.527 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3987 on 1698 degrees of freedom
Multiple R-squared: 0.82, Adjusted R-squared: 0.8194
F-statistic: 1547 on 5 and 1698 DF, p-value: < 2.2e-16
```
The coefficient interpretation for variables in the interaction model changes. For those involved in an interaction, the base coefficient now only describes the effect when the variable they interact with is zero (or is at the reference group if it's categorical). So democratic quality has a slight positive effect at the mean of life expectancy (0.088). However, this effect increases by about 0.11 when life expectancy increases by 1 (i.e. 1 standard deviation, since we standardized). The same interpretation goes for life expectancy: its base coefficient applies when democratic quality is at its mean (0.332), and the interaction term is interpreted identically.
Most people (including journal reviewers) seem to have trouble understanding interactions if you just report them in a table. Furthermore, beyond the standard linear model, e.g. with non-normal distributions, the coefficient for the interaction term doesn't even have the same precise meaning. But you know what helps us in every interaction setting? Visualization!
Let’s use ggeffects again. We’ll plot the effect of democratic quality at the mean of life expectancy, and at one standard deviation below and above. Since we already standardized it, this is even easier.
```
library(ggeffects)
plot(
ggpredict(
happy_model_interact,
terms = c('democratic_quality', 'healthy_life_expectancy_at_birth[-1, 0, 1]')
)
)
```
We seem to have discovered something interesting here! Democratic quality only has a positive effect for those countries with a high life expectancy, i.e. that are already in a good place in general. It may even be negative in countries in the contrary case. While this has to be taken with a lot of caution, it shows how exploring interactions can be fun and surprising!
Another way to plot interactions in which the variables are continuous is with a contour plot similar to the following. Here we don’t have to pick arbitrary values to plot against, and can see the predictions at all values of the covariates in question.
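A sketch of how such a plot could be made: create a grid over the two covariates (holding the others at typical values), get predictions, and plot with filled contours. The grid ranges and held-constant values here are assumptions for illustration.

```
library(ggplot2)

grid = expand.grid(
  democratic_quality = seq(-3, 3, length.out = 50),
  healthy_life_expectancy_at_birth = seq(-3, 3, length.out = 50),
  generosity = 0,
  log_gdp_per_capita = mean(happy_processed$log_gdp_per_capita, na.rm = TRUE)
)

grid$predicted_happiness = predict(happy_model_interact, newdata = grid)

ggplot(grid, aes(x = democratic_quality, y = healthy_life_expectancy_at_birth)) +
  geom_contour_filled(aes(z = predicted_happiness))
```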
We see that the lowest expected happiness based on the model is with high democratic quality and low life expectancy. The best case scenario is to be high on both.
Here is our model comparison for all three models with AIC.
```
AIC(happy_model_base, happy_model_more, happy_model_interact)
```
```
df AIC
happy_model_base 5 2043.770
happy_model_more 7 1791.237
happy_model_interact 7 1709.801
```
Looks like our interaction model is winning.
### Example: Additive models
*Generalized additive models* allow our predictors to have a *wiggly* relationship with the target variable. For more information, see [this document](https://m-clark.github.io/generalized-additive-models/), but for our purposes, that’s all you really need to know\- effects don’t have to be linear even with linear models! We will use the base R mgcv package because it is awesome and you don’t need to install anything. In this case, we’ll allow all the covariates to have a nonlinear relationship, and we denote this with the `s()` syntax.
```
library(mgcv)
happy_model_gam = gam(
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth),
data = happy_processed
)
summary(happy_model_gam)
```
```
Family: gaussian
Link function: identity
Formula:
happiness_score ~ s(democratic_quality) + s(generosity) + s(log_gdp_per_capita) +
s(healthy_life_expectancy_at_birth)
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.028888 0.008125 -3.555 0.000388 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(democratic_quality) 8.685 8.972 13.26 <2e-16 ***
s(generosity) 6.726 7.870 27.25 <2e-16 ***
s(log_gdp_per_capita) 8.893 8.996 87.20 <2e-16 ***
s(healthy_life_expectancy_at_birth) 8.717 8.977 65.82 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.872 Deviance explained = 87.5%
GCV = 0.11479 Scale est. = 0.11249 n = 1704
```
The first thing you may notice is that there are no regression coefficients. This is because the effect of any of these predictors depends on their value, so trying to assess it by a single value would be problematic at best. You can guess what will help us interpret this…
```
library(mgcViz)
plot.gamViz(happy_model_gam, allTerms = T)
```
Here is a brief summary of interpretation. We generally don’t have to worry about small wiggles.
* `democratic_quality`: Effect is most notable (positive and strong) for higher values. Negligible otherwise.
* `generosity`: Effect seems strongly positive, but mostly for lower values of generosity.
* `life_expectancy`: Effect is positive, but only if the country is around the mean or higher.
* `log GDP per capita`: Effect is mostly positive, but may depend on other factors not included in the model.
In terms of general model fit, the `Scale est.` is the same as the squared residual standard error in the other models, and is notably lower than even that of the model with the interaction (0.11 vs. 0.16). We can also see that the adjusted \\(R^2\\) is higher as well (0.87 vs. 0.82). If we wanted, we could actually do wiggly interactions also! Here is our interaction from before for the GAM case.
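As a sketch of what that could look like, a smooth interaction can be specified with a tensor product smooth in mgcv; the exact specification behind the figure is an assumption here, and the model name is hypothetical.

```
happy_model_gam_interact = gam(
  happiness_score ~ s(generosity) + s(log_gdp_per_capita) +
    te(democratic_quality, healthy_life_expectancy_at_birth),
  data = happy_processed
)

# plot the smooth interaction surface (the third smooth term)
plot(happy_model_gam_interact, scheme = 2, select = 3)
```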
Let’s check our AIC now to see which model wins.
```
AIC(
happy_model_null,
happy_model_base,
happy_model_more,
happy_model_interact,
happy_model_gam
)
```
```
df AIC
happy_model_null 2.00000 1272.755
happy_model_base 5.00000 2043.770
happy_model_more 7.00000 1791.237
happy_model_interact 7.00000 1709.801
happy_model_gam 35.02128 1148.417
```
It’s pretty clear our wiggly model is the winner, even with the added complexity. Note that even though we used a different function for the GAM model, the AIC is still comparable.
Model Averaging
---------------
Have you ever suffered from choice overload? Many folks who seek to understand some phenomenon via modeling do so. There are plenty of choices due to data processing, but then there may be many models to consider as well, and should be if you’re doing things correctly. But you know what? You don’t have to pick a best.
Model averaging is a common technique in the Bayesian world and also with some applications of machine learning (usually under the guise of *stacking*), but it is not as widely applied elsewhere, even though it could be. As an example, if we weight models by their AIC (lower AIC getting more weight), we can get averaged parameters that favor the better models, while not ignoring the lesser models if they aren't notably poorer. People use such an approach to get model averaged effects (i.e. coefficients) or predictions. In our setting, the GAM is doing so much better that its weight would basically be 1.0 and zero for the others, so the model averaged predictions would be almost identical to the GAM predictions.
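Here is a minimal sketch of how such weights (often called Akaike weights) can be computed from the AIC values, matching the sort of table shown below.

```
aic_vals = AIC(
  happy_model_base,
  happy_model_more,
  happy_model_interact,
  happy_model_gam
)$AIC

delta    = aic_vals - min(aic_vals)   # difference from the best (lowest AIC) model
rel_like = exp(-delta / 2)            # relative likelihood of each model
weights  = rel_like / sum(rel_like)   # weights sum to 1

round(weights, 3)
```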
| model | df | AIC | AICc | deltaAICc | Rel. Like. | weight |
| --- | --- | --- | --- | --- | --- | --- |
| happy\_model\_base | 5\.000 | 2043\.770 | 2043\.805 | 893\.875 | 0 | 0 |
| happy\_model\_more | 7\.000 | 1791\.237 | 1791\.303 | 641\.373 | 0 | 0 |
| happy\_model\_interact | 7\.000 | 1709\.801 | 1709\.867 | 559\.937 | 0 | 0 |
| happy\_model\_gam | 35\.021 | 1148\.417 | 1149\.930 | 0\.000 | 1 | 1 |
Model Criticism Summary
-----------------------
Statistical significance with a single model does not provide enough of a story to tell with your data. A better assessment of performance can be made on data the model has not seen, and provides a better idea of its practical capabilities. Furthermore, pitting various models of differing complexity against one another will allow for better confidence in the model or set of models we ultimately deem worthy. In general, in more explanatory settings we strive to balance performance with complexity through various means.
Model Criticism Exercises
-------------------------
### Exercise 0
Recall the [google app exercises](models.html#model-exploration-exercises), where we used a standard linear model (i.e. lm) to predict one of three target variables:
* `rating`: the user ratings of the app
* `avg_sentiment_polarity`: the average sentiment score (positive vs. negative) for the app
* `avg_sentiment_subjectivity`: the average subjectivity score (subjective vs. objective) for the app
For prediction use the following variables:
* `reviews`: number of reviews
* `type`: free vs. paid
* `size_in_MB`: size of the app in megabytes
After that we did a model with an interaction.
Either using those models, or running new ones with a different target variable, conduct the following exercises.
```
load('data/google_apps.RData')
```
### Exercise 1
Assess the model fit and performance of your first model. Perform additional diagnostics to assess how the model is doing (e.g. plot the model to look at residuals).
```
summary(model)
plot(model)
```
### Exercise 2
Compare the model with the interaction model. Based on AIC or some other metric, which one would you choose? Visualize the interaction model if it’s the better model.
```
anova(model1, model2)
AIC(model1, model2)
```
Python Model Criticism Notebook
-------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/model_criticism.ipynb)
Machine Learning
================
*Machine learning* (ML) encompasses a wide variety of techniques, from standard regression models to almost impenetrably complex modeling tools. While it may seem like magic to the uninitiated, the main thing that distinguishes it from standard statistical methods discussed thus far is an approach that heavily favors prediction over inference and explanatory power, and which takes the necessary steps to gain any predictive advantage[38](#fn38).
ML could potentially be applied in any setting, but typically works best with data sets much larger than classical statistical methods are usually applied to. However, nowadays even complex regression models can be applied to extremely large data sets, and properly applied ML may even work in simpler data settings, so this distinction is muddier than it used to be. The main distinguishing factor is mostly one of focus.
The following only very briefly provides a demonstration of concepts and approaches. I have more [in\-depth document available](https://m-clark.github.io/introduction-to-machine-learning/) for more details.
Concepts
--------
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results to come across as more magical than they should. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model) and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss respectively, and many other options besides. You will come across others that might be used for specific scenarios.
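For concreteness, here is a sketch of the two most common losses computed by hand (the inputs are made-up values).

```
# root mean squared error for a numeric target
rmse = function(y, pred) sqrt(mean((y - pred)^2))

# binary cross-entropy (log loss), where p is the predicted probability of class 1
cross_entropy = function(y, p) -mean(y * log(p) + (1 - y) * log(1 - p))

rmse(y = c(1, 2, 3), pred = c(1.1, 1.8, 3.2))
cross_entropy(y = c(1, 0, 1), p = c(.9, .2, .6))
```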
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can't do anything about, plus two components of specific interest: *bias*, the difference between the observed value and our average predicted value, and *variance*, how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and will likely do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don't add much value. In any case, let's try it with our previous example. The following uses the tidymodels approach to be consistent with earlier chapters' use of the tidyverse[40](#fn40). With it we can employ k\-fold cross\-validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average \\(R^2\\).
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority used for deep learning and many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
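To make the idea concrete, here is a toy sketch of plain stochastic gradient descent for a simple linear regression; the simulated data, learning rate, and number of passes are arbitrary choices for illustration.

```
set.seed(123)

# simulate data where y = 1 + 2*x + noise
n = 1000
x = rnorm(n)
y = 1 + 2 * x + rnorm(n)
X = cbind(1, x)        # design matrix with an intercept column

beta = c(0, 0)         # starting values for the coefficients
lr   = .01             # learning rate (step size)

for (epoch in 1:5) {
  for (i in sample(n)) {                              # one observation at a time
    grad = -2 * X[i, ] * (y[i] - sum(X[i, ] * beta))  # gradient of squared error
    beta = beta - lr * grad                           # take a small step downhill
  }
}

beta  # should be close to c(1, 2), i.e. roughly what lm would give
```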
### Tuning parameters
In any ML setting there are parameters that need to be set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done through a validation process like k\-fold cross\-validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
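Expressed directly in code, the penalized loss above might look like the following sketch (this is not how glmnet computes things internally).

```
# squared error loss plus a ridge penalty on the coefficients
# (in practice the intercept would typically not be penalized)
ridge_loss = function(beta, X, y, lambda) {
  sum((y - X %*% beta)^2) + lambda * sum(beta^2)
}

# for a small problem, a general optimizer could minimize this directly, e.g.
# optim(rep(0, ncol(X)), ridge_loss, X = X, y = y, lambda = 1)
```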
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now we need to create a set of values (a grid) to try out. In this case we set the penalty parameter to a range from near zero to near 1, and the mixture parameter to a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest that a more ridge\-like mixture and a smaller penalty tend to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller is better), shown only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s no real automatic way to incorporate interactions and nonlinearities. So we will often want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We might now find that classification improves further if, among the observations greater than or equal to 5\.75 on `X1`, we only classify as positive those that are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
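Before moving to the packaged implementation, here is a toy sketch that simply applies the hypothetical splits described above by hand; the cutoffs (5\.75 and 3) are the example values from the text, and the simulated data are purely for illustration.

```
# a toy version of the hypothetical splits above;
# the cutoffs are the example values from the text, not learned from data
set.seed(123)

d = data.frame(
  X1 = runif(20, min = 1, max = 10),
  X2 = runif(20, min = 1, max = 10)
)

d$predicted = ifelse(d$X1 >= 5.75 & d$X2 < 3, 'positive', 'negative')

head(d)
```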
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same approach as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like a relatively small `mtry` with a minimal node size does best here, roughly in keeping with standard defaults for random forests in the regression setting.
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually, however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, it would reduce to the standard regression problem.
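To make the idea of weighted combinations passed through a function more concrete, the following is a minimal sketch of a single forward pass through one hidden layer, with randomly initialized (untrained) weights and arbitrary dimensions.

```
# a minimal sketch of one forward pass through a single hidden layer;
# weights are random (untrained) and the dimensions are arbitrary
set.seed(123)

sigmoid = function(x) 1 / (1 + exp(-x))

X  = matrix(rnorm(10 * 3), ncol = 3)    # 10 observations, 3 inputs
W1 = matrix(rnorm(3 * 4),  ncol = 4)    # weights: inputs -> 4 hidden units
w2 = rnorm(4)                           # weights: hidden units -> single output

hidden = sigmoid(X %*% W1)              # hidden layer 'activations'
y_hat  = hidden %*% w2                  # regression output, no further transformation
y_hat
```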
As a simple example, we can run a neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation of the end result is required to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous models. If you do not have keras installed, you can change the engine to `nnet`, which was part of the standard R installation well before neural nets became cool again[44](#fn44). This will likely take several minutes on typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so; otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package comes with the standard R installation, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous, however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LSTM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’ which has many layers in between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
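As a hedged sketch of what that might look like with the keras package (assuming keras is installed; this is not run here), we could take a pre\-trained ResNet, freeze its layers, and add a small trainable head for our own problem.

```
# a sketch of transfer learning with keras (not run here); assumes keras is
# installed and downloads the pre-trained imagenet weights on first use
library(keras)

base = application_resnet50(
  weights     = 'imagenet',   # pre-trained layers
  include_top = FALSE,        # drop the original classification head
  input_shape = c(224, 224, 3)
)

freeze_weights(base)          # keep the pre-trained layers fixed

output = base$output %>%
  layer_global_average_pooling_2d() %>%
  layer_dense(units = 1)      # a new head for our own (e.g. regression) target

model = keras_model(inputs = base$input, outputs = output) %>%
  compile(optimizer = 'adam', loss = 'mse')
```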
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret how predictions change as inputs change, which is what we really care about, and that is not necessarily more difficult than in our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as in a standard GLM. Random forests can have the interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days.
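As one small example of such tools, the following sketch computes permutation\-based variable importance with ranger directly, reusing the prepped training data (`happy_prepped`) from the tuning examples above; it is just one quick way to see which inputs are driving the predictions.

```
# a sketch of permutation-based variable importance via ranger,
# reusing juice(happy_prepped) from the tuning examples above
library(ranger)

rf_importance = ranger(
  happiness_score ~ .,
  data       = juice(happy_prepped),
  importance = 'permutation'
)

sort(importance(rf_importance), decreasing = TRUE)
```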
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach results that were state\-of\-the\-art even just a year or two ago, and which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with a different tuning. For the first go around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type, installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units to have. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than it should be. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss functions respectively, and many other options besides. You will come across others that might be used for specific scenarios.
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and two components of specific interest: *bias*, the difference in the observed value and our average predicted value, and *variance* how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and likely will do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with early chapters use of the tidyverse[40](#fn40). With it we can employ k\-fold cross validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority for deep learning and other many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now, we need to create a set of values (grid) to try an test. In this case we set the penalty parameter from near zero to near 1, and the mixture parameter a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest a more ridge type mixture and smaller penalty tends to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller better) but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s not a real automatic way to incorporate interactions and nonlinearities. So we often will want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that if all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We now might find that even better classification results if, upon looking at the portion of data regarding those greater than or equal to 5\.75, that we only classify positive if they are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same appraoch as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
As a simple example, we can run a simple neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes for typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package is in base R, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LTSM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’ which has many layers in between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have much more resources to do do so than you. In such cases, you may only have to worry about the last couple layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now, we need to create a set of values (grid) to try an test. In this case we set the penalty parameter from near zero to near 1, and the mixture parameter a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest a more ridge type mixture and smaller penalty tends to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller better) but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now, we need to create a set of values (grid) to try an test. In this case we set the penalty parameter from near zero to near 1, and the mixture parameter a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest a more ridge type mixture and smaller penalty tends to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller better) but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s not a real automatic way to incorporate interactions and nonlinearities. So we often will want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that if all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We now might find that even better classification results if, upon looking at the portion of data regarding those greater than or equal to 5\.75, that we only classify positive if they are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same appraoch as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and they often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output (or even multiple outputs), which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, we’d be back to the standard regression problem.
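As a rough numerical sketch of that forward pass, here is one hidden layer with a sigmoid activation and random placeholder weights \- just the arithmetic, not a trained model.
```
set.seed(123)

n_obs = 10; n_inputs = 3; n_hidden = 5

X  = matrix(rnorm(n_obs * n_inputs), ncol = n_inputs)        # input data
W1 = matrix(rnorm(n_inputs * n_hidden), n_inputs, n_hidden)  # input -> hidden weights
w2 = rnorm(n_hidden)                                         # hidden -> output weights

sigmoid = function(x) 1 / (1 + exp(-x))

H     = sigmoid(X %*% W1)  # hidden layer: weighted combinations passed through a sigmoid
y_hat = H %*% w2           # output: a weighted combination of the hidden units
y_hat
```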
As a simple example, we can run a neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous models. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes on a typical machine.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package is in base R, so you already have it.
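For instance, the respecification might look something like the following \- just a sketch, with arbitrary `hidden_units` and `penalty` values, and the keras\-specific `activation` argument dropped.
```
happy_nn_nnet_spec = mlp(
  mode         = 'regression',
  hidden_units = 10,    # far fewer units than the keras example
  penalty      = 0.01,  # weight decay
  epochs       = 500
) %>%
  set_engine(engine = "nnet")
```
You could then pass this spec to `fit_resamples()` or `tune_grid()` exactly as before.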
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LSTM (long short\-term memory) model. Here is a small(!) version of the convolutional neural network known as ‘resnet’, which has many layers between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
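As a rough sketch of that idea with the R keras package (not evaluated here, and assuming a working keras/TensorFlow installation):
```
library(keras)

# resnet50 pre-trained on imagenet, minus its final classification layer
base = application_resnet50(weights = "imagenet", include_top = FALSE)

# keep the pre-trained layers fixed; only the layers you add on top would be trained
freeze_weights(base)
```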
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a single decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret changes in predictions based on changes in the inputs, which is what we really care about, and this is not necessarily more difficult than in our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as in a standard GLM. Random forests can have their interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches, such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more, to help us in this process. It might take more work, but honestly, in my consulting experience, a great many people have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days, though it may take a little work.
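As one example, a permutation\-based variable importance measure is easy to get from ranger. A minimal sketch, assuming the objects from the random forest section are still in the workspace:
```
# refit the tuned random forest, asking ranger for permutation importance
rf_imp = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
  set_engine(engine = "ranger", importance = "permutation") %>%
  fit(happiness_score ~ ., data = juice(happy_prepped))

# larger values mean a bigger loss in accuracy when that predictor is shuffled
sort(rf_imp$fit$variable.importance, decreasing = TRUE)
```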
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach results that would have been state\-of\-the\-art just a year or two ago, and which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with different tuning parameter values. For the first go\-around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type, installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
| Data Visualization |
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same appraoch as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
As a simple example, we can run a simple neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes for typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package is in base R, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LTSM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’ which has many layers in between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have much more resources to do do so than you. In such cases, you may only have to worry about the last couple layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret prediction changes based on input changes, which is what we really care about, and really is not necessarily more difficult than our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as a standard GLM. Random forests can have the interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model any way, and I’m not convinced that it’s fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days, though it may take a little work.
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to get to state\-of\-the\-art results from even just a year or two ago, and which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with a different tuning. For the first go around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type,installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units to have. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
Concepts
--------
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than it should be. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss functions respectively, and many other options besides. You will come across others that might be used for specific scenarios.
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and two components of specific interest: *bias*, the difference in the observed value and our average predicted value, and *variance* how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and likely will do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with early chapters use of the tidyverse[40](#fn40). With it we can employ k\-fold cross validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority for deep learning and other many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than it should be. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss functions respectively, and many other options besides. You will come across others that might be used for specific scenarios.
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and two components of specific interest: *bias*, the difference in the observed value and our average predicted value, and *variance* how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and likely will do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with early chapters use of the tidyverse[40](#fn40). With it we can employ k\-fold cross validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority for deep learning and other many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now we need to create a set of values (a grid) to try and test. In this case we let the penalty parameter range from near zero to near 1, and the mixture parameter range from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest that a more ridge\-like mixture and a smaller penalty tend to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller is better), but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s no real automatic way to incorporate interactions and nonlinearities. So we often will want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We might now find that even better classification results if, among those observations greater than or equal to 5\.75 on `X1`, we only classify as positive those that are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
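As a quick, hypothetical illustration of the single\-tree idea just described (not part of the ranger example that follows), we can simulate data with exactly those two cut points and let the rpart package try to recover them. The variable names and cut values simply mirror the description above.

```
# simulated data with the X1 >= 5.75 and X2 < 3 structure described above
library(rpart)

set.seed(42)

X1 = runif(500, 1, 10)
X2 = runif(500, 1, 10)
y  = factor(ifelse(X1 >= 5.75 & X2 < 3, 'positive', 'negative'))

tree = rpart(y ~ X1 + X2, method = 'class')

tree  # the printed splits should roughly recover the 5.75 and 3 cut points
```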
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same approach as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
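Before fitting anything, the following toy sketch may help fix the idea of the forward pass just described: inputs are combined via weights, passed through a sigmoid to form a hidden layer, and the hidden units are combined again to produce a single numeric output. The weights here are random and purely illustrative; an actual fit (keras, nnet, etc.) would learn them.

```
# a toy forward pass: 3 inputs -> 4 hidden units (sigmoid) -> 1 output
set.seed(1)

X  = matrix(rnorm(10 * 3), ncol = 3)   # 10 observations, 3 inputs
W1 = matrix(rnorm(3 * 4),  ncol = 4)   # input-to-hidden weights
W2 = matrix(rnorm(4),      ncol = 1)   # hidden-to-output weights

sigmoid = function(z) 1 / (1 + exp(-z))

hidden = sigmoid(X %*% W1)             # weighted combinations, then the nonlinearity
yhat   = hidden %*% W2                 # weighted combination of the hidden units

yhat
```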
As a simple example, we can run a simple neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes for typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package is in base R, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LSTM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’ which has many layers in between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret prediction changes based on input changes, which is what we really care about, and really is not necessarily more difficult than our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as a standard GLM. Random forests can have the interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days, though it may take a little work.
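As one small example of the sort of tool available, the following is a hand\-rolled permutation importance sketch on a built\-in data set, using a plain linear model just to keep it self\-contained. Packages such as vip, DALEX, or iml do this, along with LIME\- and Shapley\-style explanations, far more thoroughly; this is only to show the underlying idea of ‘shuffle an input, see how much prediction suffers’.

```
# permutation importance by hand; illustration only, using mtcars and lm
set.seed(123)

mod = lm(mpg ~ wt + hp + disp, data = mtcars)

base_rmse = sqrt(mean((mtcars$mpg - predict(mod))^2))

perm_importance = sapply(c('wt', 'hp', 'disp'), function(v) {
  shuffled = mtcars
  shuffled[[v]] = sample(shuffled[[v]])   # break this variable's link to the outcome
  sqrt(mean((mtcars$mpg - predict(mod, newdata = shuffled))^2)) - base_rmse
})

sort(perm_importance, decreasing = TRUE)  # bigger increase in error = more important
```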
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach what were state\-of\-the\-art results even just a year or two ago, results which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically, statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with a different tuning. For the first go around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type,installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units to have. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/ml.html |
Machine Learning
================
*Machine learning* (ML) encompasses a wide variety of techniques, from standard regression models to almost impenetrably complex modeling tools. While it may seem like magic to the uninitiated, the main thing that distinguishes it from standard statistical methods discussed thus far is an approach that heavily favors prediction over inference and explanatory power, and which takes the necessary steps to gain any predictive advantage[38](#fn38).
ML could potentially be applied in any setting, but typically works best with data sets much larger than classical statistical methods are usually applied to. However, nowadays even complex regression models can be applied to extremely large data sets, and properly applied ML may even work in simpler data settings, so this distinction is muddier than it used to be. The main distinguishing factor is mostly one of focus.
The following only very briefly provides a demonstration of concepts and approaches. I have more [in\-depth document available](https://m-clark.github.io/introduction-to-machine-learning/) for more details.
Concepts
--------
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there: unlike in ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than they should be. In ML however, we are explicitly concerned with loss functions and, more specifically, with evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss respectively, and many other options besides. You will come across others that might be used for specific scenarios.
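For concreteness, here are the two loss functions just mentioned computed by hand on a few made\-up values; in the tidymodels world, yardstick provides these and many other metrics.

```
# RMSE for a numeric target
y    = c(1.5, 2.0, 3.5)
yhat = c(1.3, 2.4, 3.1)

sqrt(mean((y - yhat)^2))

# (binary) cross-entropy, a.k.a. log loss, for a categorical target
obs = c(1, 0, 1)     # observed class
p   = c(.8, .3, .6)  # predicted probability of class 1

-mean(obs * log(p) + (1 - obs) * log(1 - p))
```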
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and then there are two components of specific interest: *bias*, the difference between the observed value and our average predicted value, and *variance*, how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and will likely do poorly with new observations. ML techniques trade some increase in bias for an even greater reduction in variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
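The following little simulation sketches that pattern. We fit polynomials of increasing degree to a small training sample and compute error on both the training data and a held\-out set; the specific sample sizes, noise level, and degrees are arbitrary.

```
# under- vs. overfitting with polynomial degree; illustration only
set.seed(123)

x_train = runif(30, -2, 2)
y_train = sin(x_train) + rnorm(30, sd = .3)

x_test = runif(100, -2, 2)
y_test = sin(x_test) + rnorm(100, sd = .3)

errs = sapply(1:15, function(d) {
  fit = lm(y_train ~ poly(x_train, d))
  c(train = sqrt(mean(residuals(fit)^2)),
    test  = sqrt(mean((y_test - predict(fit, data.frame(x_train = x_test)))^2)))
})

round(errs, 3)  # training error keeps falling; test error typically turns back up
```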
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
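Here is a bare\-bones version of that three\-part scheme, using a built\-in data set and a plain linear model purely for illustration; the tidymodels code below does the equivalent for the happiness model with less manual bookkeeping.

```
# manual 3-fold cross-validation; illustration only
set.seed(123)

folds = sample(rep(1:3, length.out = nrow(mtcars)))  # assign each row to one of 3 parts

cv_rmse = sapply(1:3, function(k) {
  train = mtcars[folds != k, ]
  test  = mtcars[folds == k, ]
  fit   = lm(mpg ~ wt + hp, data = train)
  sqrt(mean((test$mpg - predict(fit, newdata = test))^2))
})

cv_rmse
mean(cv_rmse)  # the estimated test error is the average loss across the three folds
```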
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with earlier chapters’ use of the tidyverse[40](#fn40). With it we can employ k\-fold cross\-validation to evaluate the loss.
```
# install.packages('tidymodels') # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority used for deep learning and many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
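To demystify the algorithm itself a bit, here is a stripped\-down stochastic gradient descent sketch for a one\-predictor regression, working on random mini\-batches of the data; real optimizers (Adam, RMSprop, etc.) mostly add momentum and adaptive step sizes on top of this basic loop. All values here are arbitrary.

```
# plain mini-batch SGD for y = b0 + b1*x; illustration only
set.seed(123)

x = rnorm(1000)
y = 2 + 3 * x + rnorm(1000)

beta = c(0, 0)  # arbitrary starting values for intercept and slope
lr   = .1       # learning rate (step size)

for (i in 1:200) {
  idx  = sample(length(y), 32)            # a random mini-batch
  yhat = beta[1] + beta[2] * x[idx]
  grad = c(mean(yhat - y[idx]),           # gradient of the squared error loss
           mean((yhat - y[idx]) * x[idx]))
  beta = beta - lr * grad
}

beta  # should end up near c(2, 3)
```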
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now, we need to create a set of values (grid) to try an test. In this case we set the penalty parameter from near zero to near 1, and the mixture parameter a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest a more ridge type mixture and smaller penalty tends to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller better) but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s not a real automatic way to incorporate interactions and nonlinearities. So we often will want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that if all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We now might find that even better classification results if, upon looking at the portion of data regarding those greater than or equal to 5\.75, that we only classify positive if they are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same appraoch as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
As a simple example, we can run a neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run relatively quickly, with performance comparable to the previous models. If you do not have keras installed, you can change the engine to `nnet`, which was part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes on a typical machine.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you'll have to install keras (both the R package and its Python backend) and be waiting a long time besides. As mentioned, the nnet package comes with the standard R installation, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LSTM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’, which has many layers between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition might suggest, it works, and works very well.
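As a rough sketch of what that can look like with the keras R package (assuming keras and its Python backend are installed, as discussed above), the following loads a pre\-trained resnet, freezes its layers, and adds a small trainable output layer on top. The number of classes (10) is just a placeholder, and this is only the model definition, not a full training workflow.
```
library(keras)

# pre-trained convolutional base, without its original classification layers
base_model = application_resnet50(weights = 'imagenet', include_top = FALSE)
freeze_weights(base_model)   # keep the pre-trained layers fixed

# add a small trainable 'head' for the new problem (10 classes is a placeholder)
predictions = base_model$output %>%
  layer_global_average_pooling_2d() %>%
  layer_dense(units = 10, activation = 'softmax')

model = keras_model(inputs = base_model$input, outputs = predictions)
```
Only the final pooling and dense layers would be trained on your data; everything else is reused as\-is.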
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret how predictions change as the inputs change, which is what we really care about, and which is not necessarily more difficult than in our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as in a standard GLM. Random forests can have their interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days.
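As one concrete example of such a tool, here is a quick sketch of permutation\-based variable importance using ranger directly. It assumes the `happy_train` data from the earlier train\-test split is still available, and drops rows with missing values just to keep the illustration simple.
```
library(ranger)

rf_fit = ranger(
  happiness_score ~ .,
  data       = tidyr::drop_na(happy_train),  # happy_train from the earlier split
  importance = 'permutation'
)

# larger values = predictions degrade more when that variable is shuffled
sort(importance(rf_fit), decreasing = TRUE)

# vip::vip(rf_fit)  # a quick importance plot, if the vip package is installed
```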
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach what would have been state\-of\-the\-art results just a year or two ago, results which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with different tuning parameter values. For the first go\-around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type, installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
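# replace the ? with your chosen predictors, e.g. reviews + size_in_MB + category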
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units and the regularization penalty. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type,installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units to have. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
Machine Learning
================
*Machine learning* (ML) encompasses a wide variety of techniques, from standard regression models to almost impenetrably complex modeling tools. While it may seem like magic to the uninitiated, the main thing that distinguishes it from standard statistical methods discussed thus far is an approach that heavily favors prediction over inference and explanatory power, and which takes the necessary steps to gain any predictive advantage[38](#fn38).
ML could potentially be applied in any setting, but typically works best with data sets much larger than classical statistical methods are usually applied to. However, nowadays even complex regression models can be applied to extremely large data sets, and properly applied ML may even work in simpler data settings, so this distinction is muddier than it used to be. The main distinguishing factor is mostly one of focus.
The following provides only a very brief demonstration of concepts and approaches. I have a more [in\-depth document available](https://m-clark.github.io/introduction-to-machine-learning/) for further details.
Concepts
--------
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel that, unlike with ML, loss is not focused on as explicitly in applied research, leaving the results produced to come across as more magical than they should be. In ML however, we are explicitly concerned with loss functions and, more specifically, with evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are more robust alternatives, such as mean absolute error and hinge loss respectively, and many other options besides. You will come across others that might be used for specific scenarios.
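To make the two most common loss functions concrete, here is a minimal sketch computing each by hand in base R; the observed and predicted values are made up purely for illustration.
```
# hypothetical observed values and predictions for a numeric target
y     = c(2.0, 3.5, 5.0, 6.5)
y_hat = c(2.2, 3.0, 5.5, 6.0)

sqrt(mean((y - y_hat)^2))     # root mean squared error

# hypothetical observed classes (0/1) and predicted probabilities
y_class = c(1, 0, 1, 1)
p_hat   = c(.8, .3, .6, .9)

# binary cross-entropy (log loss); smaller is better
-mean(y_class * log(p_hat) + (1 - y_class) * log(1 - p_hat))
```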
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about. The two components of specific interest are *bias*, the difference between the observed value and our average predicted value, and *variance*, how much that prediction would change had we trained on different data. More generally, we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and will likely do poorly with new observations. ML techniques trade some increased bias for an even greater reduction in variance, which often means less overfitting to the training data, leading to better performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
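The following is a small simulation sketch of that idea; the data and the polynomial degrees are arbitrary, but the pattern is the typical one: training error keeps falling as flexibility increases, while test error eventually flattens out or rises.
```
set.seed(123)

n = 100
x = runif(n, -2, 2)
y = sin(x) + rnorm(n, sd = .3)

train_idx = sample(n, 70)
train = data.frame(x = x[train_idx],  y = y[train_idx])
test  = data.frame(x = x[-train_idx], y = y[-train_idx])

rmse = function(obs, pred) sqrt(mean((obs - pred)^2))

# fit increasingly flexible polynomial models
results = sapply(1:10, function(degree) {
  fit = lm(y ~ poly(x, degree), data = train)
  c(train = rmse(train$y, predict(fit)),
    test  = rmse(test$y,  predict(fit, newdata = test)))
})

round(results, 3)
```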
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
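In code, that three\-fold idea is just a short loop. Here is a minimal base R sketch using the built\-in mtcars data purely for illustration.
```
set.seed(123)

# randomly assign each row to one of three folds
folds = sample(rep(1:3, length.out = nrow(mtcars)))

cv_rmse = sapply(1:3, function(k) {
  train = mtcars[folds != k, ]
  test  = mtcars[folds == k, ]

  fit  = lm(mpg ~ wt + hp, data = train)
  pred = predict(fit, newdata = test)

  sqrt(mean((test$mpg - pred)^2))   # loss on the held-out fold
})

cv_rmse         # loss for each fold
mean(cv_rmse)   # the cross-validated estimate of test error
```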
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with earlier chapters’ use of the tidyverse[40](#fn40). With it we can employ k\-fold cross\-validation to evaluate the loss.
```
# install.packages('tidymodels') # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. The output also provides the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority used for deep learning and many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
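To give a feel for what a ‘flavor of stochastic gradient descent’ looks like, here is a bare\-bones sketch for simple linear regression, where each update uses a single randomly chosen observation; the learning rate and number of iterations are arbitrary choices for illustration.
```
set.seed(123)

n = 1000
x = rnorm(n)
y = 2 + 3 * x + rnorm(n)

beta = c(0, 0)   # intercept and slope, starting at zero
lr   = .01       # learning rate (step size)

for (i in 1:5000) {
  j     = sample(n, 1)               # one randomly chosen observation
  y_hat = beta[1] + beta[2] * x[j]
  error = y_hat - y[j]
  grad  = c(error, error * x[j])     # gradient of squared error (up to a constant)
  beta  = beta - lr * grad           # take a small step downhill
}

beta              # should be close to c(2, 3)
coef(lm(y ~ x))   # compare to the usual least squares solution
```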
### Tuning parameters
In any ML setting there are parameters that need to be set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other parameter. This is usually done through a validation process like k\-fold cross\-validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
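As a quick illustration of the shrinkage idea, here is the closed\-form ridge solution computed by hand on the built\-in mtcars data (standardized, and with arbitrary penalty values); the coefficients move toward zero as the penalty grows.
```
# ridge coefficients by hand: solve (X'X + lambda*I) b = X'y
X = scale(as.matrix(mtcars[, c('wt', 'hp', 'disp', 'drat')]))
y = scale(mtcars$mpg)

ridge = function(X, y, lambda) {
  drop(solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y)))
}

round(
  cbind(
    lambda_0   = ridge(X, y, 0),     # ordinary least squares
    lambda_10  = ridge(X, y, 10),
    lambda_100 = ridge(X, y, 100)    # heavier penalty, more shrinkage
  ),
  3
)
```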
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now we need to create a set of values (a grid) to try and test. In this case we set the penalty parameter to range from near zero to near 1, and the mixture parameter to a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest that a more ridge\-type mixture and a smaller penalty tend to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller is better), but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s no automatic way to incorporate interactions and nonlinearities. So we will often want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We might now find that classification is even better if, within the portion of the data where `X1` is greater than or equal to 5\.75, we only classify observations as positive when they are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
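To make the single\-tree idea concrete, here is a small sketch using the rpart package (included with the standard R installation) on simulated data that mimics the `X1`/`X2` description above; the printed splits should roughly recover the two cut points.
```
library(rpart)

set.seed(123)

n  = 500
X1 = runif(n, 1, 10)
X2 = runif(n, 1, 10)

# positive only when X1 >= 5.75 and X2 < 3
y = factor(ifelse(X1 >= 5.75 & X2 < 3, 'pos', 'neg'))

single_tree = rpart(y ~ X1 + X2)
single_tree   # the splits should sit near 5.75 for X1 and 3 for X2
```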
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same approach as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like a smaller number of randomly selected predictors and a small minimum node size work best here, which is roughly in keeping with the usual defaults for random forests in the regression setting.
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
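As a sketch of the ‘weighted combinations put through some function’ idea, here is a single forward pass for one hidden layer with made\-up (random) weights; in practice the weights would be learned by minimizing a loss.
```
set.seed(123)

sigmoid = function(z) 1 / (1 + exp(-z))

n_obs    = 5   # observations
n_input  = 3   # input variables
n_hidden = 4   # hidden units

X = matrix(rnorm(n_obs * n_input), n_obs, n_input)

W1 = matrix(rnorm(n_input * n_hidden), n_input, n_hidden)   # input -> hidden weights
W2 = matrix(rnorm(n_hidden), n_hidden, 1)                   # hidden -> output weights

hidden = sigmoid(X %*% W1)   # weighted combinations of inputs, transformed
y_hat  = hidden %*% W2       # output layer; for regression, no further transformation

y_hat
```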
As a simple example, we can run a simple neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes for typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so; otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package comes with the standard R installation, so you already have it.
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing an LSTM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’, which has many layers in between input and output.
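For a rough sense of what ‘many layers’ means in code, here is a sketch of a (very) small convolutional network specified with keras; it assumes keras and a TensorFlow backend are installed, the layer sizes and input shape are arbitrary, and we are not actually fitting anything here.
```
library(keras)

small_cnn = keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 10, activation = 'softmax')

summary(small_cnn)   # only a handful of layers; architectures like resnet stack far more
```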
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition might tell you, it works, and works very well.
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret how predictions change as the inputs change, which is what we actually care about, and this is not necessarily more difficult than in our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as in a standard GLM. Random forests can have their interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches, such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more, to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days.
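As one example of how such tools work, here is a bare\-bones sketch of permutation\-based variable importance: shuffle one predictor at a time in the test data and see how much the loss degrades. It reuses `tuned_model` and `happy_test_normalized` from the random forest example above, so treat the specifics as illustrative.
```
perm_importance = function(model, data, outcome) {
  base_pred = predict(model, new_data = data)$.pred
  base_rmse = sqrt(mean((data[[outcome]] - base_pred)^2))

  vars = setdiff(names(data), outcome)

  sapply(vars, function(v) {
    shuffled      = data
    shuffled[[v]] = sample(shuffled[[v]])   # break this variable's link to the outcome

    pred = predict(model, new_data = shuffled)$.pred
    sqrt(mean((data[[outcome]] - pred)^2)) - base_rmse   # increase in RMSE
  })
}

sort(perm_importance(tuned_model, happy_test_normalized, 'happiness_score'), decreasing = TRUE)
```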
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach what were state\-of\-the\-art results even just a year or two ago, which, for all intents and purposes, would be good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with a different tuning. For the first go around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type, installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
Concepts
--------
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than it should be. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss functions respectively, and many other options besides. You will come across others that might be used for specific scenarios.
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and two components of specific interest: *bias*, the difference in the observed value and our average predicted value, and *variance* how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and likely will do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with early chapters use of the tidyverse[40](#fn40). With it we can employ k\-fold cross validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority for deep learning and other many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
### Loss
We discussed loss functions [before](models.html#estimation), and there was a reason I went more in depth there, mainly because I feel, unlike with ML, loss is not explicitly focused on as much in applied research, leaving the results produced to come across as more magical than it should be. In ML however, we are explicitly concerned with loss functions and, more specifically, evaluating loss on test data. This loss is evaluated over successive iterations of a particular technique, or averaged over several test sets via cross\-validation. Typical loss functions are *Root Mean Squared Error* for numeric targets (essentially the same as for a standard linear model), and *cross\-entropy* for categorical outcomes. There are robust alternatives, such as mean absolute error and hinge loss functions respectively, and many other options besides. You will come across others that might be used for specific scenarios.
The following image, typically called a *learning curve*, shows an example of loss on a test set as a function of model complexity. In this case, models with more complexity perform better, but only to a point, before test error begins to rise again.
### Bias\-variance tradeoff
Prediction error, i.e. loss, is composed of several sources. One part is *measurement error*, which we can’t do anything about, and two components of specific interest: *bias*, the difference in the observed value and our average predicted value, and *variance* how much that prediction would change had we trained on different data. More generally we can think of this as a problem of *underfitting* vs. *overfitting*. With a model that is too simple, we underfit, and bias is high. If we overfit, the model is too close to the training data, and likely will do poorly with new observations. ML techniques trade some increased bias for even greater reduced variance, which often means less overfitting to the training data, leading to increased performance on new data.
In the following[39](#fn39), the blue line represents models applied to training data, while the red line regards performance on the test set. We can see that for the data we train the model to, error will always go down with increased complexity. However, we can see that at some point, the test error will increase as we have started to overfit to the training data.
### Regularization
As we have noted, a model fit to a single data set might do very well with the data at hand, but then suffer when predicting independent data. Also, oftentimes we are interested in a ‘best’ subset of predictors among a great many, and typically the estimated coefficients from standard approaches are overly optimistic unless dealing with sufficiently large sample sizes. This general issue can be improved by shrinking estimates toward zero, such that some of the performance in the initial fit is sacrificed for improvement with regard to prediction. The basic idea in terms of the tradeoff is that we are trading some bias for notably reduced variance. We demonstrate regularized regression below.
### Cross\-validation
*Cross\-validation* is widely used for validation and/or testing. With validation, we are usually concerned with picking parameter settings for the model, while the testing is used for ultimate assessment of model performance. Conceptually there is nothing new beyond what was [discussed previously](model_criticism.html#predictive-performance) regarding holding out data for assessing predictive performance, we just do more of it.
As an example, let’s say we split our data into three parts. We use two parts (combined) as our training data, then the third part as test. At this point this is identical to our demonstration before. But then, we switch which part is test and which two are training, and do the whole thing over again. And finally once more, so that each of our three parts has taken a turn as a test set. Our estimated error is the average loss across the three times.
Typically we do it more than three times, usually 10, and there are fancier methods of *k\-fold cross\-validation*, though they typically don’t serve to add much value. In any case, let’s try it with our previous example. The following uses the tidymodels approach to be consistent with early chapters use of the tidyverse[40](#fn40). With it we can employ k\-fold cross validation to evaluate the loss.
```
# install.packages(tidymodels) # if needed
library(tidymodels)
load('data/world_happiness.RData')
set.seed(1212)
# specify the model
happy_base_spec = linear_reg() %>%
set_engine(engine = "lm")
# by default 10-folds
happy_folds = vfold_cv(happy)
library(tune)
happy_base_results = fit_resamples(
happy_base_spec,
happiness_score ~ democratic_quality + generosity + log_gdp_per_capita,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_res = happy_base_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.629 | 10 | 0\.022 |
| rsq | standard | 0\.697 | 10 | 0\.022 |
We now see that our average test error is 0\.629\. It also gives the average R2\.
### Optimization
With ML, much more attention is paid to different optimizers, but the vast majority for deep learning and other many other methods are some flavor of *stochastic gradient descent*. Often due to the sheer volume of data/parameters, this optimization is done on chunks of the data and in parallel. In general, some optimization approaches may work better in some situations or for some models, where ‘better’ means quicker convergence, or perhaps a smoother ride toward convergence. It is not the case that you would come to incorrect conclusions using one method vs. another per se, just that you might reach those conclusions in a more efficient fashion. The following graphic displays SGD versus several variants[41](#fn41). The x and y axes represent the potential values two parameters might take, with the best selection of those values based on a loss function somewhere toward the bottom right. We can see that they all would get there eventually, but some might do so more quickly. This may or may not be the case for some other data situation.
### Tuning parameters
In any ML setting there are parameters that need to set in order to even run the model. In regularized regression this may be the penalty parameter, for random forests the tree depth, for neural nets, how many hidden units, and many other things. None of these *tuning parameters* is known beforehand, and so must be tuned, or learned, just like any other. This is usually done with through validation process like k\-fold cross validation. The ‘best’ settings are then used to make final predictions on the test set.
The usual workflow is something like the following:
* **Tuning**: With the **training data**, use a cross\-validation approach to run models with different values for tuning parameters.
* **Model Selection**: Select the ‘best’ model as that which minimizes or maximizes the objective function estimated during cross\-validation (e.g. RMSE, accuracy, etc.). The test data in this setting are typically referred to as *validation sets*.
* **Prediction**: Use the best model to make predictions on the **test set**.
Techniques
----------
### Regularized regression
A starting point for getting into ML from the more inferential methods is to use *regularized regression*. These are conceptually no different than standard LM/GLM types of approaches, but they add something to the loss function.
\\\[\\mathcal{Loss} \= \\Sigma(y \- \\hat{y})^2 \+ \\lambda\\cdot\\Sigma\\beta^2\\]
In the above, this is the same squared error loss function as before, but we add a penalty that is based on the size of the coefficients. So, while initially our loss goes down with some set of estimates, the penalty based on their size might be such that the estimated loss actually increases. This has the effect of shrinking the estimates toward zero. Well, [why would we want that](https://stats.stackexchange.com/questions/179864/why-does-shrinkage-work)? This introduces [bias in the coefficients](https://stats.stackexchange.com/questions/207760/when-is-a-biased-estimator-preferable-to-unbiased-one), but the end result is a model that will do better on test set prediction, which is the goal of the ML approach. The way this works regards the bias\-variance tradeoff we discussed previously.
The following demonstrates regularized regression using the glmnet package. It actually uses *elastic net*, which has a mixture of two penalties, one of which is the squared sum of coefficients (typically called *ridge regression*) and the other is the sum of their absolute values (the so\-called *lasso*).
```
library(tidymodels)
happy_prepped = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
recipe(happiness_score ~ .) %>%
step_scale(everything()) %>%
step_naomit(happiness_score) %>%
prep() %>%
bake(happy)
happy_folds = happy_prepped %>%
drop_na() %>%
vfold_cv()
library(tune)
happy_regLM_spec = linear_reg(penalty = 1e-3, mixture = .5) %>%
set_engine(engine = "glmnet")
happy_regLM_results = fit_resamples(
happy_regLM_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE)
)
cv_regLM_res = happy_regLM_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.335 | 10 | 0\.018 |
| rsq | standard | 0\.897 | 10 | 0\.013 |
#### Tuning parameters for regularized regression
For the previous model setting, we wouldn’t know what the penalty or the mixing parameter should be. This is where we can use cross validation to choose those. We’ll redo our model spec, create a set of values to search over, and pass that to the tuning function for cross\-validation. Our ultimate model will then be applied to the test data.
First we create our training\-test split.
```
# removing some variables with lots of missing values
happy_split = happy %>%
select(-country, -gini_index_world_bank_estimate, -dystopia_residual) %>%
initial_split(prop = 0.75)
happy_train = training(happy_split)
happy_test = testing(happy_split)
```
Next we process the data. This is all specific to the tidymodels approach.
```
happy_prepped = happy_train %>%
recipe(happiness_score ~ .) %>%
step_knnimpute(everything()) %>% # impute missing values
step_center(everything()) %>% # standardize
step_scale(everything()) %>% # standardize
prep() # prepare for other uses
happy_test_normalized <- bake(happy_prepped, new_data = happy_test, everything())
happy_folds = happy_prepped %>%
bake(happy) %>%
vfold_cv()
# now we are indicating we don't know the value to place
happy_regLM_spec = linear_reg(penalty = tune(), mixture = tune()) %>%
set_engine(engine = "glmnet")
```
Now, we need to create a set of values (grid) to try an test. In this case we set the penalty parameter from near zero to near 1, and the mixture parameter a range of values from 0 (ridge regression) to 1 (lasso).
```
grid_search = expand_grid(
penalty = exp(seq(-4, -.25, length.out = 10)),
mixture = seq(0, 1, length.out = 10)
)
regLM_tune = tune_grid(
happy_prepped,
model = happy_regLM_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(regLM_tune, metric = "rmse") + geom_smooth(se = FALSE)
```
```
best = show_best(regLM_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
penalty mixture .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 0.0183 0.111 rmse standard 0.288 10 0.00686 Model011
```
The results suggest a more ridge type mixture and smaller penalty tends to work better, and this is more or less in keeping with the ‘best’ model. Here is a plot where size indicates RMSE (smaller better) but only for RMSE \< .5 (slight jitter added).
With the ‘best’ model selected, we can refit to the training data with the parameters in hand. We can then do our usual performance assessment with the test set.
```
# for technical reasons, only mixture is passed to the model; see https://github.com/tidymodels/parsnip/issues/215
tuned_model = linear_reg(penalty = best$penalty, mixture = best$mixture) %>%
set_engine(engine = "glmnet") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.297 |
| rsq | standard | 0\.912 |
Not too bad!
### Random forests
A limitation of standard linear models, especially with many input variables, is that there’s not a real automatic way to incorporate interactions and nonlinearities. So we often will want to use techniques that do so. To understand *random forests* and similar techniques (boosted trees, etc.), we can start with a simple decision tree. To begin, for a single input variable (`X1`) whose range might be 1 to 10, we find that a cut at 5\.75 results in the best classification, such that if all observations greater than or equal to 5\.75 are classified as positive, and the rest negative. This general approach is fairly straightforward and conceptually easy to grasp, and it is because of this that tree approaches are appealing.
Now let’s add a second input (`X2`), also on a 1 to 10 range. We now might find that even better classification results if, upon looking at the portion of data regarding those greater than or equal to 5\.75, that we only classify positive if they are also less than 3 on the second variable. The following is a hypothetical tree reflecting this.
This tree structure allows for both interactions between variables, and nonlinear relationships between some input and the target variable (e.g. the second branch could just be the same `X1` but with some cut value greater than 5\.75\). Random forests randomly select a few from the available input variables, and create a tree that minimizes (maximizes) some loss (objective) function on a validation set. A given tree can potentially be very wide/deep, but instead of just one tree, we now do, say, 1000 trees. A final prediction is made based on the average across all trees.
We demonstrate the random forest using the ranger package. We initially don’t do any tuning here, but do note that the number of variables to randomly select (`mtry` below), the number of total trees, the tree depth \- all of these are potential tuning parameters to investigate in the model building process.
```
happy_rf_spec = rand_forest(mode = 'regression', mtry = 6) %>%
set_engine(engine = "ranger")
happy_rf_results = fit_resamples(
happy_rf_spec,
happiness_score ~ .,
happy_folds
)
cv_rf_res = happy_rf_results %>%
collect_metrics()
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.222 | 10 | 0\.004 |
| rsq | standard | 0\.950 | 10 | 0\.003 |
It would appear we’re doing a bit better than the regularized regression.
#### Tuning parameters for random forests
As mentioned, we’d have a few tuning parameters to play around with. We’ll tune the number of predictors to randomly select per tree, as well as the minimum sample size for each leaf. The following takes the same appraoch as with the regularized regression model. Note that this will take a while (several minutes).
```
grid_search = expand.grid(
mtry = c(3, 5, ncol(happy_train)-1), # up to total number of predictors
min_n = c(1, 5, 10)
)
happy_rf_spec = rand_forest(mode = 'regression',
mtry = tune(),
min_n = tune()) %>%
set_engine(engine = "ranger")
rf_tune = tune_grid(
happy_prepped,
model = happy_rf_spec,
resamples = happy_folds,
grid = grid_search
)
autoplot(rf_tune, metric = "rmse")
```
```
best = show_best(rf_tune, metric = "rmse", maximize = FALSE, n = 1) # we want to minimize rmse
best
```
```
# A tibble: 1 x 8
mtry min_n .metric .estimator mean n std_err .config
<dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <chr>
1 3 1 rmse standard 0.219 10 0.00422 Model1
```
Looks like in general using all the variables for selection is the best (this is in keeping with standard approaches for random forest with regression).
Now we are ready to refit the model with the selected tuning parameters and make predictions on the test data.
```
tuned_model = rand_forest(mode = 'regression', mtry = best$mtry, min_n = best$min_n) %>%
set_engine(engine = "ranger") %>%
fit(happiness_score ~ ., data = juice(happy_prepped))
test_predictions = predict(tuned_model, new_data = happy_test_normalized)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, happy_test_normalized),
truth = happiness_score,
estimate = .pred
)
```
| .metric | .estimator | .estimate |
| --- | --- | --- |
| rmse | standard | 0\.217 |
| rsq | standard | 0\.955 |
### Neural networks
*Neural networks* have been around for a long while as a general concept in artificial intelligence and even as a machine learning algorithm, and often work quite well. In some sense, neural networks can simply be thought of as nonlinear regression. Visually however, we can see them as a graphical model with layers of inputs and outputs. Weighted combinations of the inputs are created[42](#fn42) and put through some function (e.g. the sigmoid function) to produce the next layer of inputs. This next layer goes through the same process to produce either another layer, or to predict the output, or even multiple outputs, which serves as the final layer. All the layers between the input and output are usually referred to as hidden layers. If there were a single hidden layer with a single unit and no transformation, then it becomes the standard regression problem.
As a simple example, we can run a simple neural network with a single hidden layer of 1000 units[43](#fn43). Since this is a regression problem, no further transformation is required of the end result to map it onto the target variable. I set the number of epochs to 500, which you can think of as the number of iterations from our discussion of optimization. There are many tuning parameters I am not showing that could certainly be fiddled with as well. This is just an example that will run quickly with comparable performance to the previous. If you do not have keras installed, you can change the engine to `nnet`, which was a part of the base R set of packages well before neural nets became cool again[44](#fn44). This will likely take several minutes for typical machines.
```
happy_nn_spec = mlp(
mode = 'regression',
hidden_units = 1000,
epochs = 500,
activation = 'linear'
) %>%
set_engine(engine = "keras")
happy_nn_results = fit_resamples(
happy_nn_spec,
happiness_score ~ .,
happy_folds,
control = control_resamples(save_pred = TRUE,
verbose = FALSE,
allow_par = TRUE)
)
```
| .metric | .estimator | mean | n | std\_err |
| --- | --- | --- | --- | --- |
| rmse | standard | 0\.818 | 10 | 0\.102 |
| rsq | standard | 0\.896 | 10 | 0\.014 |
You will typically see neural nets applied to image and natural language processing, but as demonstrated above, they can be applied in any scenario. It will take longer to set up and train, but once that’s done, you’re good to go, and may have a much better predictive result.
I leave tuning the model as an [exercise](ml.html#machine-learning-exercises), but definitely switch to using `nnet` if you do so, otherwise you’ll have to install keras (both for R and Python) and be waiting a long time besides. As mentioned, the nnet package is in base R, so you already have it.
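As a rough sketch of that engine swap (my own example, not from the text; it assumes the `happy_folds` object from the earlier setup, and uses far fewer hidden units since nnet is a modest single\-layer implementation):
```
# sketch: the same type of specification, but with the nnet engine and fewer hidden units
happy_nn_nnet_spec = mlp(
  mode = 'regression',
  hidden_units = 10,
  epochs = 500
) %>%
  set_engine(engine = "nnet")

happy_nn_nnet_results = fit_resamples(
  happy_nn_nnet_spec,
  happiness_score ~ .,
  happy_folds
)
```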
#### Deep learning
*Deep learning* can be summarized succinctly as ‘very complicated neural nets’. Really, that’s about it. The complexity can be quite tremendous however, and there is a wide variety from which to choose. For example, we just ran a basic neural net above, but for image processing we might use a convolutional neural network, and for natural language processing some LSTM model. Here is a small(!) version of the convolutional neural network known as ‘resnet’ which has many layers between input and output.
The nice thing is that a lot of the work has already been done for you, and you can use models where most of the layers in the neural net have already been trained by people at Google, Facebook, and others who have far more resources to do so than you. In such cases, you may only have to worry about the last couple of layers for your particular problem. Applying a pre\-trained model to a different data scenario is called *transfer learning*, and regardless of what your intuition is, it will work, and very well.
*Artificial intelligence* (AI) used to refer to specific applications of deep/machine learning (e.g. areas in computer vision and natural language processing), but thanks to the popular press, the term has pretty much lost all meaning. AI actually has a very old history dating to the cognitive revolution in psychology and the early days of computer science in the late 50s and early 60s. Again though, you can think of it as a subset of the machine learning problems.
Interpreting the Black Box
--------------------------
One of the key issues with ML techniques is interpretability. While a decision tree is immensely interpretable, a thousand of them is not so much. What any particular node or even layer in a complex neural network represents may be difficult to fathom. However, we can still interpret prediction changes based on input changes, which is what we really care about, and really is not necessarily more difficult than our standard inferential setting.
For example, a regularized regression might not have straightforward inference, but the coefficients are interpreted exactly the same as a standard GLM. Random forests can have the interactions visualized, which is what we said was required for interpretation in standard settings. Furthermore, there are many approaches such as *Local Interpretable Model\-Agnostic Explanations* (LIME), variable importance measures, Shapley values, and more to help us in this process. It might take more work, but honestly, in my consulting experience, a great many have trouble interpreting anything beyond a standard linear model anyway, and I’m not convinced that it’s a fundamentally different problem to [extract meaning from the machine learning context](https://christophm.github.io/interpretable-ml-book/) these days.
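As a small, hedged illustration of one such tool, the ranger package can compute permutation\-based variable importance directly; the sketch below assumes the `happy_prepped` recipe from the earlier tuning sections is available.
```
# sketch: permutation-based variable importance with ranger
# (assumes happy_prepped from the earlier tuning sections)
library(ranger)

rf_imp = ranger(
  happiness_score ~ .,
  data       = juice(happy_prepped),
  importance = 'permutation'
)

# larger values indicate predictors whose shuffling hurts predictions more
sort(importance(rf_imp), decreasing = TRUE)
```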
Machine Learning Summary
------------------------
Hopefully this has demystified ML for you somewhat. Nowadays it may take little effort to reach results that were state\-of\-the\-art just a year or two ago, and which, for all intents and purposes, are good enough both now and for the foreseeable future. Despite what many may think, it is not magic, but for the more classically statistically minded, it may require a bit of a different perspective.
Machine Learning Exercises
--------------------------
### Exercise 1
Use the ranger package to predict the Google variable `rating` by several covariates. Feel free to just use the standard function approach rather than all the tidymodels stuff if you want, but do use a training and test approach. You can then try the model again with a different tuning. For the first go around, starter code is provided.
```
# run these if needed to load data and install the package
# load('data/google_apps.RData')
# install.packages('ranger')
google_for_mod = google_apps %>%
select(avg_sentiment_polarity, rating, type, installs, reviews, size_in_MB, category) %>%
drop_na()
google_split = google_for_mod %>%
initial_split(prop = 0.75)
google_train = training(google_split)
google_test = testing(google_split)
ga_rf_results = rand_forest(mode = 'regression', mtry = 2, trees = 1000) %>%
set_engine(engine = "ranger") %>%
fit(
rating ~ ?,
google_train
)
test_predictions = predict(ga_rf_results, new_data = google_test)
rmse = yardstick::rmse(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
rsq = yardstick::rsq(
data = bind_cols(test_predictions, google_test),
truth = rating,
estimate = .pred
)
bind_rows(
rmse,
rsq
)
```
### Exercise 2
Respecify the neural net model demonstrated above as follows, and tune over the number of hidden units and the penalty. This will probably take several minutes depending on your machine.
```
grid_search = expand.grid(
hidden_units = c(25, 50),
penalty = exp(seq(-4, -.25, length.out = 5))
)
happy_nn_spec = mlp(mode = 'regression',
penalty = tune(),
hidden_units = tune()) %>%
set_engine(engine = "nnet")
nn_tune = tune_grid(
happy_prepped, # from previous examples, see tuning for regularized regression
model = happy_nn_spec,
resamples = happy_folds, # from previous examples, see tuning for regularized regression
grid = grid_search
)
show_best(nn_tune, metric = "rmse", maximize = FALSE, n = 1)
```
Python Machine Learning Notebook
--------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/ml.ipynb)
ggplot2
=======
Visualization is key to telling the data’s story, and it can take a lot of work to get things to look just right. But, it can also be a lot of fun, so let’s dive in!
When it comes to visualization, the most [popular](https://r-pkg.org/downloaded) package used in R is ggplot2. It’s so popular, it or its aesthetic is even copied in other languages/programs as well. It entails a grammar of graphics (hence the **gg**), and learning that grammar is key to using it effectively. Some of the strengths of ggplot2 include:
* The ease of getting a good looking plot
* Easy customization
* A lot of necessary data processing is done for you
* Clear syntax
* Easy multidimensional approach
* Decent default color scheme
* *Lots* of extensions
Every graph is built from the same few parts, and it’s important to be aware of a few key ideas, which we will cover in turn.
* Layers (and geoms)
* Piping
* Aesthetics
* Facets
* Scales
* Themes
* Extensions
Note that while you can obviously use base R for visualization, it’s not going to be as easy or as flexible as ggplot2.
Layers
------
In general, we start with a base layer and add to it. In most cases you’ll start as follows.
```
# recall that starwars is in the dplyr package
ggplot(aes(x = height, y = mass), data = starwars)
```
The code above would just produce a plot background, but nothing else. However, with the foundation in place, we’re now ready to add something to it. Let’s add some points (the outlier is Jabba the Hutt).
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point()
```
Perhaps we want to change labels or theme. These would be additional layers to the plot.
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point(color = 'white') +
labs(x = 'Height in cm', y = 'Weight in kg') +
theme_dark()
```
Each layer is consecutively added by means of a pipe operator, and layers may involve geoms, scales, labels, facets, etc. You may have many different layers to produce one plot, and there really is no limit. However, some efficiencies may be possible for a given situation. For example, it’s more straightforward to use geom\_smooth than to calculate fits, standard errors, etc. and then add multiple geoms to produce the same thing. This is the sort of thing you’ll get used to as you use ggplot more.
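For instance, a single smoother layer does the model fitting and interval shading for you; here is a quick sketch of my own using the mpg data that ships with ggplot2.
```
library(ggplot2)

# geom_smooth fits a model behind the scenes and draws both the fit and its uncertainty band
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth()
```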
Piping
------
As we saw, layers are added via piping (\+). The first layers added after the base are typically geoms, or geometric objects that represent the data, and include things like:
* points
* lines
* density
* text
In case you’re wondering why ggplot doesn’t use `%>%` as in the tidyverse and other visualization packages, it’s because ggplot2 was using pipes before it was cool, well before those came along. Otherwise, the concept is the same as we saw in the [data processing section](pipes.html#pipes).
```
ggplot(aes(x = myvar, y = myvar2), data = mydata) +
geom_point()
```
Our base is provided via the ggplot function, and specifies the data at the very least, but commonly also the x and y aesthetics.
The geom\_point function adds a layer of points, and now we would have a scatterplot. Alternatively, you could have specified the x and y aesthetic at the geom\_point layer, but if you’re going to have the same x, y, color, etc. aesthetics regardless of layer, put it in the base. Otherwise, doing it by layer gives you more flexibility if needed. Geoms even have their own data argument, allowing you to combine information from several sources for a single visualization.
Aesthetics
----------
Aesthetics map data to various visual aspects of the plot, including size, color etc. The function used in ggplot to do this is aes.
```
aes(
x = myvar,
y = myvar2,
color = myvar3,
group = g
)
```
The best way to understand what goes into the aes function is to ask whether the value varies with the data. For example, if I want the size of points to be a fixed value, I would code the following.
```
... +
geom_point(..., size = 4)
```
However, if I want the size to be associated with the data in some way, I use it as an aesthetic.
```
... +
geom_point(aes(size = myvar))
```
The same goes for practically any aspect of a geom\- size, color, fill, etc. If it is a fixed value, set it outside the aesthetic. If it varies based on the data, put it within an aesthetic.
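Here is a complete, runnable contrast of the two cases (my own sketch, using the mpg data that comes with ggplot2).
```
library(ggplot2)

# fixed value: every point gets size 4, so size sits outside aes()
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(size = 4)

# mapped value: size varies with the number of cylinders, so it goes inside aes()
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(aes(size = cyl))
```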
Geoms
-----
In the ggplot2 world, geoms are the geometric objects\- shapes, lines, and other parts of the visualization we want to display. Even if you use ggplot2 a lot, you probably didn’t know about many or most of these.
* geom\_abline: Reference lines: horizontal, vertical, and diagonal
* geom\_area: Ribbons and area plots
* geom\_bar: Bar charts
* geom\_bin2d: Heatmap of 2d bin counts
* geom\_blank: Draw nothing
* geom\_boxplot: A box and whiskers plot (in the style of Tukey)
* geom\_col: Bar charts
* geom\_contour: 2d contours of a 3d surface
* geom\_count: Count overlapping points
* geom\_crossbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_curve: Line segments and curves
* geom\_density: Smoothed density estimates
* geom\_density\_2d: Contours of a 2d density estimate
* geom\_dotplot: Dot plot
* geom\_errorbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_errorbarh: Horizontal error bars
* geom\_freqpoly: Histograms and frequency polygons
* geom\_hex: Hexagonal heatmap of 2d bin counts
* geom\_histogram: Histograms and frequency polygons
* geom\_hline: Reference lines: horizontal, vertical, and diagonal
* geom\_jitter: Jittered points
* geom\_label: Text
* geom\_line: Connect observations
* geom\_linerange: Vertical intervals: lines, crossbars \& errorbars
* geom\_map: Polygons from a reference map
* geom\_path: Connect observations
* geom\_point: Points
* geom\_pointrange: Vertical intervals: lines, crossbars \& errorbars
* geom\_polygon: Polygons
* geom\_qq: A quantile\-quantile plot
* geom\_qq\_line: A quantile\-quantile plot
* geom\_quantile: Quantile regression
* geom\_raster: Rectangles
* geom\_rect: Rectangles
* geom\_ribbon: Ribbons and area plots
* geom\_rug: Rug plots in the margins
* geom\_segment: Line segments and curves
* geom\_sf: Visualise sf objects
* geom\_sf\_label: Visualise sf objects
* geom\_sf\_text: Visualise sf objects
* geom\_smooth: Smoothed conditional means
* geom\_spoke: Line segments parameterised by location, direction and distance
* geom\_step: Connect observations
* geom\_text: Text
* geom\_tile: Rectangles
* geom\_violin: Violin plot
* geom\_vline: Reference lines: horizontal, vertical, and diagonal
Examples
--------
Let’s get more of a feel for things by seeing some examples that demonstrate some geoms and aesthetics.
To begin, after setting the base aesthetic, we’ll set some explicit values for the geom.
```
library(ggplot2)
data("diamonds")
data('economics')
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(size = .5, color = 'peru')
```
Next we use two different geoms, and one is even using a different data source. Note that geoms have arguments both common and specific to them. In the following, `label` is used for geom\_text, but it would be ignored by geom\_line.
```
ggplot(aes(x = date, y = unemploy), data = economics) +
geom_line() +
geom_text(
aes(label = unemploy),
vjust = -.5,
data = filter(economics, date == '2009-10-01')
)
```
In the following, one setting, alpha (transparency), is not mapped to the data, while size and color are[45](#fn45).
```
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(size = carat, color = clarity), alpha = .05)
```
There are some other options to play with as well.
```
ggplot(aes(x = carat, y = price), data = diamonds %>% sample_frac(.01)) +
geom_point(aes(size = carat, color = clarity), key_glyph = "vpath")
```
Stats
-----
There are many statistical functions built in, and it is a key strength of ggplot that you don’t have to do a lot of processing for very common plots.
Here are some quantile regression lines:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_quantile()
```
The following shows loess (or additive model) smooths. We can do some fine\-tuning and use model\-based approaches for visualization.
```
data(mcycle, package = 'MASS')
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_point() +
geom_smooth(formula = y ~ s(x, bs = 'ad'), method = 'gam')
```
Bootstrapped confidence intervals:
```
ggplot(mtcars, aes(cyl, mpg)) +
geom_point() +
stat_summary(
fun.data = "mean_cl_boot",
colour = "orange",
alpha = .75,
size = 1
)
```
The take\-home message here is to always let ggplot do the work for you if at all possible. However, I will say that I find it easier to create the summary data I want to visualize with tidyverse tools, rather than use stat\_summary, and you may have a similar experience.
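As a hedged sketch of that summarize\-then\-plot workflow (my own example with the mpg data, using dplyr for the summary):
```
library(dplyr)
library(ggplot2)

# compute the summary yourself...
mpg_summary = mpg %>%
  group_by(cyl) %>%
  summarise(
    hwy_mean = mean(hwy),
    hwy_se   = sd(hwy) / sqrt(n())
  )

# ...then plot the summarized values directly
ggplot(mpg_summary, aes(x = cyl, y = hwy_mean)) +
  geom_pointrange(aes(ymin = hwy_mean - 2 * hwy_se,
                      ymax = hwy_mean + 2 * hwy_se))
```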
Scales
------
Often there are many things we want to change about the plot, for example, the size and values of axis labels, the range of sizes for points to take, the specific colors we want to use, and so forth. Be aware that there are a great many options here, and you will regularly want to use them.
A very common thing you’ll do is change the labels for the axes. You definitely don’t have to go and change the variable name itself to do this, just use the labs function. There are also functions for individual parts, e.g. xlab, ylab and ggtitle.
```
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_smooth(se = FALSE) +
labs(
x = 'milliseconds after impact',
y = 'head acceleration',
title = 'Motorcycle Accident'
)
```
A frequent operation is changing the x and y look in the form of limits and tick marks. Like labs, there is a general lims function and specific functions for just the specific parts. In addition, we may want to get really detailed using scale\_x\_\* or scale\_y\_\*.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
ylim(c(0, 60))
```
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_y_continuous(
limits = c(0, 60),
breaks = seq(0, 60, by = 12),
minor_breaks = seq(6, 60, by = 6)
)
```
Another common option is to change the size of points in some way. While we assign the aesthetic as before, it comes with defaults that might not work for a given situation. Play around with the range values.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_size(range = c(1, 3))
```
We will talk about color issues later, but for now, you may want to apply something besides the default options. The following shows a built\-in color scale for a color aesthetic that is treated as continuous, and one that is discrete and which we want to supply our own colors (these actually come from plotly’s default color scheme).
```
ggplot(mpg, aes(x = displ, y = hwy, color = cyl)) +
geom_point() +
scale_color_gradient2()
```
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
geom_point() +
scale_color_manual(values = c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"))
```
We can even change the scale of the data itself.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
scale_x_log10()
```
In short, scale alterations are really useful for getting just the plot you want, and there is a lot of flexibility for you to work with. There are a lot of scales too, so know what you have available.
* scale\_alpha, scale\_alpha\_continuous, scale\_alpha\_date, scale\_alpha\_datetime, scale\_alpha\_discrete, scale\_alpha\_identity, scale\_alpha\_manual, scale\_alpha\_ordinal: Alpha transparency scales
* scale\_color\_brewer, scale\_color\_distiller: Sequential, diverging and qualitative colour scales from colorbrewer.org
* scale\_color\_continuous, scale\_color\_discrete, scale\_color\_gradient, scale\_color\_gradient2, scale\_color\_gradientn, scale\_color\_grey, scale\_color\_hue, scale\_color\_identity, scale\_color\_manual, scale\_color\_viridis\_c, scale\_color\_viridis\_d, scale\_continuous\_identity Various color scales
* scale\_discrete\_identity, scale\_discrete\_manual: Discrete scales
* scale\_fill\_brewer, scale\_fill\_continuous, scale\_fill\_date, scale\_fill\_datetime, scale\_fill\_discrete, scale\_fill\_distiller, scale\_fill\_gradient, scale\_fill\_gradient2, scale\_fill\_gradientn, scale\_fill\_grey, scale\_fill\_hue, scale\_fill\_identity, scale\_fill\_manual, scale\_fill\_ordinal, scale\_fill\_viridis\_c, scale\_fill\_viridis\_d: Scales for geoms that can be filled with color
* scale\_linetype, scale\_linetype\_continuous, scale\_linetype\_discrete, scale\_linetype\_identity, scale\_linetype\_manual: Scales for line patterns
* scale\_shape, scale\_shape\_continuous, scale\_shape\_discrete, scale\_shape\_identity, scale\_shape\_manual, scale\_shape\_ordinal: Scales for shapes, aka glyphs
* scale\_size, scale\_size\_area, scale\_size\_continuous, scale\_size\_date, scale\_size\_datetime, scale\_size\_discrete, scale\_size\_identity, scale\_size\_manual, scale\_size\_ordinal: Scales for area or radius
* scale\_x\_continuous, scale\_x\_date, scale\_x\_datetime, scale\_x\_discrete, scale\_x\_log10, scale\_x\_reverse, scale\_x\_sqrt, scale\_y\_continuous, scale\_y\_date, scale\_y\_datetime, scale\_y\_discrete, scale\_y\_log10, scale\_y\_reverse, scale\_y\_sqrt: Position scales for continuous data (x \& y)
* scale\_x\_time, scale\_y\_time: Position scales for date/time data
Facets
------
Facets allow for paneled display, a very common operation. In general, we often want comparison plots. The facet\_grid function will produce a grid, and often this is all that’s needed. However, facet\_wrap is more flexible, while possibly taking a bit extra effort to get things just the way you want. Both use a formula approach to specify the grouping.
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
Multiple plots
--------------
Often we want distinct visualizations to come together in one plot. There are several packages that can help you here: gridExtra, cowplot, and more recently patchwork[46](#fn46). The latter especially makes things easy.
```
library(patchwork)
g1 = ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
g2 = ggplot(mtcars, aes(wt)) +
geom_density()
g3 = ggplot(mtcars, aes(mpg)) +
geom_density()
g1 / # initial plot, place next part underneath
(g2 | g3) # groups g2 and g3 side by side
```
Not that you want this, but just to demonstrate the flexibility.
```
p1 = ggplot(mtcars) + geom_point(aes(mpg, disp))
p2 = ggplot(mtcars) + geom_boxplot(aes(gear, disp, group = gear))
p3 = ggplot(mtcars) + geom_smooth(aes(disp, qsec))
p4 = ggplot(mtcars) + geom_bar(aes(carb))
p5 = ggplot(mtcars) + geom_violin(aes(cyl, mpg, group = cyl))
p1 +
p2 +
(p3 / p4) * theme_void() +
p5 +
plot_layout(widths = c(2, 1))
```
You’ll typically want to use facets to show subsets of the same data, and tools like patchwork to show different kinds of plots together.
Fine control
------------
ggplot2 makes it easy to get good looking graphs quickly. However the amount of fine control is extensive. The following plot is hideous (aside from the background, which is totally rad), but illustrates the point.
```
library(grid)  # for rasterGrob; lambosun below is an image (raster) object loaded elsewhere in the book's setup

ggplot(aes(x = carat, y = price), data = diamonds) +
annotation_custom(
rasterGrob(
lambosun,
width = unit(1, "npc"),
height = unit(1, "npc"),
interpolate = FALSE
),-Inf,
Inf,
-Inf,
Inf
) +
geom_point(aes(color = clarity), alpha = .5) +
scale_y_log10(breaks = c(1000, 5000, 10000)) +
xlim(0, 10) +
scale_color_brewer(type = 'div') +
facet_wrap( ~ cut, ncol = 3) +
theme_minimal() +
theme(
axis.ticks.x = element_line(color = 'darkred'),
axis.text.x = element_text(angle = -45),
axis.text.y = element_text(size = 20),
strip.text = element_text(color = 'forestgreen'),
strip.background = element_blank(),
panel.grid.minor = element_line(color = 'lightblue'),
legend.key = element_rect(linetype = 4),
legend.position = 'bottom'
)
```
Themes
------
In the last example you saw two uses of a theme\- a built\-in version that comes with ggplot (theme\_minimal), and specific customization (theme(…)). The built\-in themes provide ready\-made approaches that might already be good enough for a finished product. For the theme function, each argument, and there are many, takes on a specific value or an element function:
* element\_rect
* element\_line
* element\_text
* element\_blank
Each of those element functions has arguments specific to it. For example, for element\_text you can specify the font size, while for element\_line you could specify the line type.
Note that the base theme of ggplot, and I would say every plotting package, is probably going to need manipulation before a plot is ready for presentation. For example, the ggplot theme doesn’t work well for web presentation, and is even worse for print. You will almost invariably need to tweak it. I suggest using and saving your own custom theme for easy application for any visualization package you use frequently.
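For example, a minimal sketch of defining and reusing your own theme might look like the following.
```
library(ggplot2)

# define a reusable theme once...
my_theme = theme_minimal() +
  theme(
    axis.text        = element_text(size = 12),
    panel.grid.minor = element_blank(),
    legend.position  = 'bottom'
  )

# ...then add it to any plot like any other layer
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  my_theme
```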
Extensions
----------
ggplot2 now has its own extension system, and there is even a [website](http://www.ggplot2-exts.org/) to track the extensions. Examples include:
* additional themes
* maps
* interactivity
* animations
* marginal plots
* network graphs
* time series
* aligning multiple ggplot visualizations, possibly of different types
Here’s an example with gganimate.
```
library(gganimate)
load('data/gapminder.RData')
gap_plot = gapminder_2019 %>%
filter(giniPercap != 40)
gap_plot_filter = gap_plot %>%
filter(country %in% c('United States', 'Mexico', 'Canada'))
initial_plot = ggplot(gap_plot, aes(x = year, y = giniPercap, group = country)) +
geom_line(alpha = .05) +
geom_path(
aes(color = country),
lwd = 2,
arrow = arrow(
length = unit(0.25, "cm")
),
alpha = .5,
data = gap_plot_filter,
show.legend = FALSE
) +
geom_text(
aes(color = country, label = country),
nudge_x = 5,
nudge_y = 2,
size = 2,
data = gap_plot_filter,
show.legend = FALSE
) +
theme_clean() +
transition_reveal(year)
animate(initial_plot, end_pause = 50, nframes = 150, rewind = TRUE)
```
As one can see, ggplot2 is only the beginning. You’ll have a lot of tools at your disposal. Furthermore, many modeling and other packages will produce ggplot graphics to which you can add your own layers and tweak like you would any other ggplot.
ggplot2 Summary
---------------
ggplot2 is an easy to use, but powerful visualization tool. It allows one to think in many dimensions for any graph, and extends well beyond the basics. Use it to easily create more interesting visualizations.
ggplot2 Exercises
-----------------
### Exercise 0
Load the ggplot2 package if you haven’t already.
### Exercise 1
Create two plots, one a scatterplot (e.g. with geom\_point) and one with lines (e.g. geom\_line) with a data set of your choosing (all of the following are base R or available after loading ggplot2). Some suggestions:
* faithful: Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
* msleep: mammals sleep dataset with sleep times and weights etc.
* diamonds: used in the slides
* economics: US economic time series.
* txhousing: Housing sales in TX.
* midwest: Midwest demographics.
* mpg: Fuel economy data from 1999 and 2008 for 38 popular models of car
Recall the basic form for ggplot.
```
ggplot(data = *, aes(x = *, y = *, other)) +
geom_*() +
otherLayers, theme etc.
```
Themes to play with:
* theme\_bw
* theme\_classic
* theme\_dark
* theme\_gray
* theme\_light
* theme\_linedraw
* theme\_minimal
* theme\_clean (requires the visibly package and an appreciation of the Lamborghini background from the previous visualization)
### Exercise 2
Play around and change the arguments to the following. You’ll need to install the maps package.
* For example, do points for all county midpoints. For that you’d need to change the x and y for the point geom to an aesthetic based on the longitude and latitude, as well as add its data argument to use the seats data frame.
* Make the color of the points or text based on `subregion`. This will require adding the fill argument to the polygon geom and removing the NA setting. In addition, add the argument show.legend\=F (outside the aesthetic), or you’ll have a problematic legend (recall what we said before about too many colors!). Try making color based on subregion too.
* See if you can use element\_blank on a theme argument to remove the axis information. See ?theme for ideas.
```
library(maps)
mi = map_data("county", "michigan")
seats = mi %>%
group_by(subregion) %>%
summarise_at(vars(lat, long), function(x) median(range(x)))
# inspect the data
# head(mi)
# head(seats)
ggplot(mi, aes(long, lat)) +
geom_polygon(aes(group = subregion), fill = NA, colour = "grey60") +
geom_text(aes(label = subregion), data = seats, size = 1, angle = 45) +
geom_point(x=-83.748333, y=42.281389, color='#1e90ff', size=3) +
theme_minimal() +
theme(panel.grid=element_blank())
```
Python Plotnine Notebook
------------------------
The R community really lucked out with ggplot, and the basic philosophy behind it is missing from practically every other static plotting package or tool. Python’s version of base R plotting is matplotlib, which continues to serve people well. But like R base plots, it can take a lot of work to get anything remotely visually appealing. Seaborn is another option, but still, just isn’t in the same league.
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Layers
------
In general, we start with a base layer and add to it. In most cases you’ll start as follows.
```
# recall that starwars is in the dplyr package
ggplot(aes(x = height, y = mass), data = starwars)
```
The code above would just produce a plot background, but nothing else. However, with the foundation in place, we’re now ready to add something to it. Let’s add some points (the outlier is Jabba the Hut).
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point()
```
Perhaps we want to change labels or theme. These would be additional layers to the plot.
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point(color = 'white') +
labs(x = 'Height in cm', y = 'Weight in kg') +
theme_dark()
```
Each layer is consecutively added by means of a pipe operator, and layers may regard geoms, scales, labels, facets etc. You may have many different layers to produce one plot, and there really is no limit. However some efficiencies may be possible for a given situation. For example, it’s more straightforward to use geom\_smooth than calculate fits, standard errors etc. and then add multiple geoms to produce the same thing. This is the sort of thing you’ll get used to as you use ggplot more.
Piping
------
As we saw, layers are added via piping (\+). The first layers added after the base are typically geoms, or geometric objects that represent the data, and include things like:
* points
* lines
* density
* text
In case you’re wondering why ggplot doesn’t use `%>%` as in the tidyverse and other visualization packages, it’s because ggplot2 was using pipes before it was cool, well before those came along. Otherwise, the concept is the same as we saw in the [data processing section](pipes.html#pipes).
```
ggplot(aes(x = myvar, y = myvar2), data = mydata) +
geom_point()
```
Our base is provided via the ggplot function, and specifies the data at the very least, but commonly also the x and y aesthetics.
The geom\_point function adds a layer of points, and now we would have a scatterplot. Alternatively, you could have specified the x and y aesthetic at the geom\_point layer, but if you’re going to have the same x, y, color, etc. aesthetics regardless of layer, put it in the base. Otherwise, doing it by layer gives you more flexibility if needed. Geoms even have their own data argument, allowing you to combine information from several sources for a single visualization.
Aesthetics
----------
Aesthetics map data to various visual aspects of the plot, including size, color etc. The function used in ggplot to do this is aes.
```
aes(
x = myvar,
y = myvar2,
color = myvar3,
group = g
)
```
The best way to understand what goes into the aes function is if the value is varying. For example, if I want the size of points to be a certain value, I would code the following.
```
... +
geom_point(..., size = 4)
```
However, if I want the size to be associated with the data in some way, I use it as an aesthetic.
```
... +
geom_point(aes(size = myvar))
```
The same goes for practically any aspect of a geom\- size, color, fill, etc. If it is a fixed value, set it outside the aesthetic. If it varies based on the data, put it within an aesthetic.
Geoms
-----
In the ggplot2 world, geoms are the geometric objects\- shapes, lines, and other parts of the visualization we want to display. Even if you use ggplot2 a lot, you probably didn’t know about many or most of these.
* geom\_abline: Reference lines: horizontal, vertical, and diagonal
* geom\_area: Ribbons and area plots
* geom\_bar: Bar charts
* geom\_bin2d: Heatmap of 2d bin counts
* geom\_blank: Draw nothing
* geom\_boxplot: A box and whiskers plot (in the style of Tukey)
* geom\_col: Bar charts
* geom\_contour: 2d contours of a 3d surface
* geom\_count: Count overlapping points
* geom\_crossbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_curve: Line segments and curves
* geom\_density: Smoothed density estimates
* geom\_density\_2d: Contours of a 2d density estimate
* geom\_dotplot: Dot plot
* geom\_errorbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_errorbarh: Horizontal error bars
* geom\_freqpoly: Histograms and frequency polygons
* geom\_hex: Hexagonal heatmap of 2d bin counts
* geom\_histogram: Histograms and frequency polygons
* geom\_hline: Reference lines: horizontal, vertical, and diagonal
* geom\_jitter: Jittered points
* geom\_label: Text
* geom\_line: Connect observations
* geom\_linerange: Vertical intervals: lines, crossbars \& errorbars
* geom\_map: Polygons from a reference map
* geom\_path: Connect observations
* geom\_point: Points
* geom\_pointrange: Vertical intervals: lines, crossbars \& errorbars
* geom\_polygon: Polygons
* geom\_qq: A quantile\-quantile plot
* geom\_qq\_line: A quantile\-quantile plot
* geom\_quantile: Quantile regression
* geom\_raster: Rectangles
* geom\_rect: Rectangles
* geom\_ribbon: Ribbons and area plots
* geom\_rug: Rug plots in the margins
* geom\_segment: Line segments and curves
* geom\_sf: Visualise sf objects
* geom\_sf\_label: Visualise sf objects
* geom\_sf\_text: Visualise sf objects
* geom\_smooth: Smoothed conditional means
* geom\_spoke: Line segments parameterised by location, direction and distance
* geom\_step: Connect observations
* geom\_text: Text
* geom\_tile: Rectangles
* geom\_violin: Violin plot
* geom\_vline: Reference lines: horizontal, vertical, and diagonal
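As a quick taste of a couple of the less familiar entries, the following sketch uses the built\-in diamonds and mpg data to show hexagonal binning (useful when points overplot badly) and rug marks in the margins.
```
# hexagonal binning instead of tens of thousands of overlapping points
# (geom_hex requires the hexbin package to be installed)
ggplot(diamonds, aes(x = carat, y = price)) +
  geom_hex()

# rug marks show the marginal distributions alongside the points
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  geom_rug(alpha = .3)
```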
Examples
--------
Let’s get more of a feel for things by seeing some examples that demonstrate some geoms and aesthetics.
To begin, after setting the base aesthetic, we’ll set some explicit values for the geom.
```
library(ggplot2)
library(dplyr)   # loaded for filter(), sample_frac(), and %>%, used in later examples
data("diamonds")
data('economics')
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(size = .5, color = 'peru')
```
Next we use two different geoms, and one is even using a different data source. Note that geoms have arguments both common and specific to them. In the following, `label` is used for geom\_text, but it would be ignored by geom\_line.
```
ggplot(aes(x = date, y = unemploy), data = economics) +
geom_line() +
geom_text(
aes(label = unemploy),
vjust = -.5,
data = filter(economics, date == '2009-10-01')
)
```
In the following, one setting, alpha (transparency), is not mapped to the data, while size and color are[45](#fn45).
```
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(size = carat, color = clarity), alpha = .05)
```
There are some other options to play with as well.
```
ggplot(aes(x = carat, y = price), data = diamonds %>% sample_frac(.01)) +
geom_point(aes(size = carat, color = clarity), key_glyph = "vpath")
```
Stats
-----
There are many statistical functions built in, and it is a key strength of ggplot that you don’t have to do a lot of processing for very common plots.
Here are some quantile regression lines:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_quantile()
```
By default, geom\_smooth uses a loess or additive model (GAM) smooth, depending on the data size. We can also do some fine\-tuning and use model\-based approaches for visualization; the following specifies a GAM with an adaptive smooth, fit via the mgcv package.
```
data(mcycle, package = 'MASS')
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_point() +
geom_smooth(formula = y ~ s(x, bs = 'ad'), method = 'gam')
```
Bootstrapped confidence intervals:
```
ggplot(mtcars, aes(cyl, mpg)) +
geom_point() +
stat_summary(
fun.data = "mean_cl_boot",
colour = "orange",
alpha = .75,
size = 1
)
```
The take\-home message here is to always let ggplot do the work for you if at all possible. However, I will say that I find it easier to create the summary data I want to visualize with tidyverse tools, rather than use stat\_summary, and you may have a similar experience.
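For instance, here is a minimal sketch of that summarize\-first approach: compute the group means and rough two\-standard\-error bounds with dplyr, then plot the result. The interval calculation is only illustrative; it is not the same as the bootstrapped intervals above.
```
library(dplyr)

cyl_summary = mtcars %>%
  group_by(cyl) %>%
  summarise(
    mean_mpg = mean(mpg),
    se = sd(mpg) / sqrt(n())
  )

ggplot(cyl_summary, aes(x = cyl, y = mean_mpg)) +
  geom_pointrange(
    aes(ymin = mean_mpg - 2 * se, ymax = mean_mpg + 2 * se),
    color = 'orange'
  )
```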
Scales
------
Often there are many things we want to change about the plot, for example, the size and values of axis labels, the range of sizes for points to take, the specific colors we want to use, and so forth. Be aware that there are a great many options here, and you will regularly want to use them.
A very common thing you’ll do is change the labels for the axes. You definitely don’t have to go and change the variable name itself to do this, just use the labs function. There are also functions for individual parts, e.g. xlab, ylab and ggtitle.
```
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_smooth(se = FALSE) +
labs(
x = 'milliseconds after impact',
y = 'head acceleration',
title = 'Motorcycle Accident'
)
```
A frequent operation is adjusting the look of the x and y axes, for example their limits and tick marks. Like labs, there is a general lims function, as well as functions for the individual parts, e.g. xlim and ylim. In addition, we may want to get really detailed using scale\_x\_\* or scale\_y\_\*.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
ylim(c(0, 60))
```
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_y_continuous(
limits = c(0, 60),
breaks = seq(0, 60, by = 12),
minor_breaks = seq(6, 60, by = 6)
)
```
Another common option is to change the size of points in some way. While we assign the aesthetic as before, it comes with defaults that might not work for a given situation. Play around with the range values.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_size(range = c(1, 3))
```
We will talk about color issues later, but for now, you may want to apply something besides the default options. The following shows a built\-in color scale for a color aesthetic that is treated as continuous, and then a discrete one for which we supply our own colors (these particular values come from plotly’s default color scheme).
```
ggplot(mpg, aes(x = displ, y = hwy, color = cyl)) +
geom_point() +
scale_color_gradient2()
```
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
geom_point() +
scale_color_manual(values = c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"))
```
We can even change the scale of the data itself.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
scale_x_log10()
```
In short, scale alterations are really useful for getting just the plot you want, and there is a lot of flexibility for you to work with. There are a lot of scales too, so know what you have available; a quick example using one of the viridis color scales follows the list.
* scale\_alpha, scale\_alpha\_continuous, scale\_alpha\_date, scale\_alpha\_datetime, scale\_alpha\_discrete, scale\_alpha\_identity, scale\_alpha\_manual, scale\_alpha\_ordinal: Alpha transparency scales
* scale\_color\_brewer, scale\_color\_distiller: Sequential, diverging and qualitative colour scales from colorbrewer.org
* scale\_color\_continuous, scale\_color\_discrete, scale\_color\_gradient, scale\_color\_gradient2, scale\_color\_gradientn, scale\_color\_grey, scale\_color\_hue, scale\_color\_identity, scale\_color\_manual, scale\_color\_viridis\_c, scale\_color\_viridis\_d, scale\_continuous\_identity Various color scales
* scale\_discrete\_identity, scale\_discrete\_manual: Discrete scales
* scale\_fill\_brewer, scale\_fill\_continuous, scale\_fill\_date, scale\_fill\_datetime, scale\_fill\_discrete, scale\_fill\_distiller, scale\_fill\_gradient, scale\_fill\_gradient2, scale\_fill\_gradientn, scale\_fill\_grey, scale\_fill\_hue, scale\_fill\_identity, scale\_fill\_manual, scale\_fill\_ordinal, scale\_fill\_viridis\_c, scale\_fill\_viridis\_d: Scales for geoms that can be filled with color
* scale\_linetype, scale\_linetype\_continuous, scale\_linetype\_discrete, scale\_linetype\_identity, scale\_linetype\_manual: Scales for line patterns
* scale\_shape, scale\_shape\_continuous, scale\_shape\_discrete, scale\_shape\_identity, scale\_shape\_manual, scale\_shape\_ordinal: Scales for shapes, aka glyphs
* scale\_size, scale\_size\_area, scale\_size\_continuous, scale\_size\_date, scale\_size\_datetime, scale\_size\_discrete, scale\_size\_identity, scale\_size\_manual, scale\_size\_ordinal: Scales for area or radius
* scale\_x\_continuous, scale\_x\_date, scale\_x\_datetime, scale\_x\_discrete, scale\_x\_log10, scale\_x\_reverse, scale\_x\_sqrt, scale\_y\_continuous, scale\_y\_date, scale\_y\_datetime, scale\_y\_discrete, scale\_y\_log10, scale\_y\_reverse, scale\_y\_sqrt: Position scales for continuous data (x \& y)
* scale\_x\_time, scale\_y\_time: Position scales for date/time data
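As one example from the list above, the viridis scales provide perceptually uniform, colorblind\-friendly palettes; a quick sketch with the mpg data:
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
  geom_point() +
  scale_color_viridis_d()   # discrete viridis palette for the factor color aesthetic
```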
Facets
------
Facets allow for paneled display, a very common operation, since we often want comparison plots. The facet\_grid function will produce a grid, and often this is all that’s needed. However, facet\_wrap is more flexible, while possibly taking a bit of extra effort to get things just the way you want; a small example of that flexibility follows the basic examples below. Both use a formula approach to specify the grouping.
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
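As a small example of facet\_wrap’s extra flexibility, it can free the axis scales panel by panel, whereas facet\_grid can only free them by row or column. A quick sketch:
```
# each panel gets its own y axis range
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  facet_wrap(~ cyl, scales = 'free_y', nrow = 1)
```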
Multiple plots
--------------
Often we want distinct visualizations to come together in one plot. There are several packages that can help you here: gridExtra, cowplot, and more recently patchwork[46](#fn46). The latter especially makes things easy.
```
library(patchwork)
g1 = ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
g2 = ggplot(mtcars, aes(wt)) +
geom_density()
g3 = ggplot(mtcars, aes(mpg)) +
geom_density()
g1 / # initial plot, place next part underneath
(g2 | g3) # groups g2 and g3 side by side
```
Not that you want this, but just to demonstrate the flexibility.
```
p1 = ggplot(mtcars) + geom_point(aes(mpg, disp))
p2 = ggplot(mtcars) + geom_boxplot(aes(gear, disp, group = gear))
p3 = ggplot(mtcars) + geom_smooth(aes(disp, qsec))
p4 = ggplot(mtcars) + geom_bar(aes(carb))
p5 = ggplot(mtcars) + geom_violin(aes(cyl, mpg, group = cyl))
p1 +
p2 +
(p3 / p4) * theme_void() +
p5 +
plot_layout(widths = c(2, 1))
```
You’ll typically want to use facets to show subsets of the same data, and tools like patchwork to show different kinds of plots together.
Fine control
------------
ggplot2 makes it easy to get good\-looking graphs quickly. However, the amount of fine control available is extensive. The following plot is hideous (aside from the background, which is totally rad), but it illustrates the point.
```
ggplot(aes(x = carat, y = price), data = diamonds) +
annotation_custom(
rasterGrob(   # rasterGrob() comes from the grid package; lambosun below is an image object created elsewhere (not shown)
lambosun,
width = unit(1, "npc"),
height = unit(1, "npc"),
interpolate = FALSE
),-Inf,
Inf,
-Inf,
Inf
) +
geom_point(aes(color = clarity), alpha = .5) +
scale_y_log10(breaks = c(1000, 5000, 10000)) +
xlim(0, 10) +
scale_color_brewer(type = 'div') +
facet_wrap( ~ cut, ncol = 3) +
theme_minimal() +
theme(
axis.ticks.x = element_line(color = 'darkred'),
axis.text.x = element_text(angle = -45),
axis.text.y = element_text(size = 20),
strip.text = element_text(color = 'forestgreen'),
strip.background = element_blank(),
panel.grid.minor = element_line(color = 'lightblue'),
legend.key = element_rect(linetype = 4),
legend.position = 'bottom'
)
```
Themes
------
In the last example you saw two uses of themes: a built\-in version that comes with ggplot (theme\_minimal), and specific customization (theme(…)). The built\-in themes provide ready\-made approaches that might already be good enough for a finished product. The theme function has many arguments, each of which takes either a specific value or an element function:
* element\_rect
* element\_line
* element\_text
* element\_blank
Each of those element functions has arguments specific to it. For example, for element\_text you can specify the font size, while for element\_line you could specify the line type.
Note that the base theme of ggplot (and, I would say, of every plotting package) will probably need manipulation before a plot is ready for presentation. For example, the default ggplot theme doesn’t work well for web presentation, and is even worse for print. You will almost invariably need to tweak it. I suggest creating and saving your own custom theme for easy application with any visualization package you use frequently, as in the sketch below.
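Here is a minimal sketch of that idea: save a small set of tweaks as your own theme object, then add it to individual plots or make it the session default with theme\_set. The specific settings are arbitrary.
```
my_theme = theme_minimal() +
  theme(
    legend.position  = 'bottom',
    panel.grid.minor = element_blank()
  )

# add it to a single plot...
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  my_theme

# ...or make it the default for the rest of the session
theme_set(my_theme)
```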
Extensions
----------
ggplot2 now has its own extension system, and there is even a [website](http://www.ggplot2-exts.org/) to track the extensions. Examples include:
* additional themes
* maps
* interactivity
* animations
* marginal plots
* network graphs
* time series
* aligning multiple ggplot visualizations, possibly of different types
Here’s an example with gganimate.
```
library(gganimate)
load('data/gapminder.RData')
gap_plot = gapminder_2019 %>%
filter(giniPercap != 40)
gap_plot_filter = gap_plot %>%
filter(country %in% c('United States', 'Mexico', 'Canada'))
initial_plot = ggplot(gap_plot, aes(x = year, y = giniPercap, group = country)) +
geom_line(alpha = .05) +
geom_path(
aes(color = country),
lwd = 2,
arrow = arrow(
length = unit(0.25, "cm")
),
alpha = .5,
data = gap_plot_filter,
show.legend = FALSE
) +
geom_text(
aes(color = country, label = country),
nudge_x = 5,
nudge_y = 2,
size = 2,
data = gap_plot_filter,
show.legend = FALSE
) +
theme_clean() +
transition_reveal(year)
animate(initial_plot, end_pause = 50, nframes = 150, rewind = TRUE)
```
As one can see, ggplot2 is only the beginning. You’ll have a lot of tools at your disposal. Furthermore, many modeling and other packages will produce ggplot graphics to which you can add your own layers and tweak like you would any other ggplot.
ggplot2 Summary
---------------
ggplot2 is an easy to use, but powerful visualization tool. It allows one to think in many dimensions for any graph, and extends well beyond the basics. Use it to easily create more interesting visualizations.
ggplot2 Exercises
-----------------
### Exercise 0
Load the ggplot2 package if you haven’t already.
### Exercise 1
Create two plots, one a scatterplot (e.g. with geom\_point) and one with lines (e.g. geom\_line), with a data set of your choosing (all of the following are in base R or available after loading ggplot2). Some suggestions:
* faithful: Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
* msleep: mammals sleep dataset with sleep times and weights etc.
* diamonds: used in the slides
* economics: US economic time series.
* txhousing: Housing sales in TX.
* midwest: Midwest demographics.
* mpg: Fuel economy data from 1999 and 2008 for 38 popular models of car
Recall the basic form for ggplot.
```
ggplot(data = *, aes(x = *, y = *, other)) +
geom_*() +
otherLayers, theme etc.
```
Themes to play with:
* theme\_bw
* theme\_classic
* theme\_dark
* theme\_gray
* theme\_light
* theme\_linedraw
* theme\_minimal
* theme\_clean (requires the visibly package and an appreciation of the Lamborghini background from the previous visualization)
### Exercise 2
Play around and change the arguments to the following. You’ll need to install the maps package.
* For example, add points for all county midpoints. For that you’d need to map the point geom’s x and y to the longitude and latitude as aesthetics, and set its data argument to the seats data frame.
* Make the color of the points or text based on `subregion`. This will require adding the fill argument to the polygon geom and removing the NA setting. In addition, add the argument show.legend\=F (outside the aesthetic), or you’ll have a problematic legend (recall what we said before about too many colors!). Try making color based on subregion too.
* See if you can use element\_blank on a theme argument to remove the axis information. See ?theme for ideas.
```
library(maps)
mi = map_data("county", "michigan")
seats = mi %>%
group_by(subregion) %>%
summarise_at(vars(lat, long), function(x) median(range(x)))
# inspect the data
# head(mi)
# head(seats)
ggplot(mi, aes(long, lat)) +
geom_polygon(aes(group = subregion), fill = NA, colour = "grey60") +
geom_text(aes(label = subregion), data = seats, size = 1, angle = 45) +
geom_point(x=-83.748333, y=42.281389, color='#1e90ff', size=3) +
theme_minimal() +
theme(panel.grid=element_blank())
```
Python Plotnine Notebook
------------------------
The R community really lucked out with ggplot, and the basic philosophy behind it is missing from practically every other static plotting package or tool. Python’s version of base R plotting is matplotlib, which continues to serve people well. But like R base plots, it can take a lot of work to get anything remotely visually appealing. Seaborn is another option, but it still just isn’t in the same league.
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/ggplot2.html |
ggplot2
=======
Visualization is key to telling the data’s story, and it can take a lot of work to get things to look just right. But, it can also be a lot of fun, so let’s dive in!
When it comes to visualization, the most [popular](https://r-pkg.org/downloaded) package used in R is ggplot2. It’s so popular, it or its aesthetic is even copied in other languages/programs as well. It entails a grammar of graphics (hence the **gg**), and learning that grammar is key to using it effectively. Some of the strengths of ggplot2 include:
* The ease of getting a good looking plot
* Easy customization
* A lot of necessary data processing is done for you
* Clear syntax
* Easy multidimensional approach
* A decent default color scheme
* *Lots* of extensions
Every graph is built from the same few parts, and it’s important to be aware of a few key ideas, which we will cover in turn.
* Layers (and geoms)
* Piping
* Aesthetics
* Facets
* Scales
* Themes
* Extensions
Note that while you can obviously use base R for visualization, it’s not going to be as easy or as flexible as ggplot2.
Layers
------
In general, we start with a base layer and add to it. In most cases you’ll start as follows.
```
# recall that starwars is in the dplyr package
ggplot(aes(x = height, y = mass), data = starwars)
```
The code above would just produce a plot background, but nothing else. However, with the foundation in place, we’re now ready to add something to it. Let’s add some points (the outlier is Jabba the Hutt).
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point()
```
Perhaps we want to change labels or theme. These would be additional layers to the plot.
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point(color = 'white') +
labs(x = 'Height in cm', y = 'Weight in kg') +
theme_dark()
```
Each layer is added consecutively by means of the pipe (\+) operator, and layers may involve geoms, scales, labels, facets, etc. You may have many different layers to produce one plot, and there really is no limit. However, some efficiencies may be possible for a given situation. For example, it’s more straightforward to use geom\_smooth than to calculate fits and standard errors yourself and then add multiple geoms to produce the same thing. This is the sort of thing you’ll get used to as you use ggplot more.
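To illustrate that point, here is a rough sketch of the two routes to the same line\-plus\-ribbon display: letting geom\_smooth fit a linear model for you, versus fitting, predicting, and layering everything yourself. The manual version only approximates what geom\_smooth does in the linear case.
```
# one layer does all the work
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = 'lm')

# versus fitting the model and building the layers by hand
fit  = lm(mpg ~ wt, data = mtcars)
pred = data.frame(wt = seq(min(mtcars$wt), max(mtcars$wt), length.out = 100))
pred = cbind(pred, predict(fit, newdata = pred, interval = 'confidence'))

ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_ribbon(aes(x = wt, ymin = lwr, ymax = upr), data = pred, inherit.aes = FALSE, alpha = .2) +
  geom_line(aes(x = wt, y = fit), data = pred, inherit.aes = FALSE)
```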
Piping
------
As we saw, layers are added via piping (\+). The first layers added after the base are typically geoms, or geometric objects that represent the data, and include things like:
* points
* lines
* density
* text
In case you’re wondering why ggplot doesn’t use `%>%` as in the tidyverse and other visualization packages, it’s because ggplot2 was using pipes before it was cool, well before those came along. Otherwise, the concept is the same as we saw in the [data processing section](pipes.html#pipes).
```
ggplot(aes(x = myvar, y = myvar2), data = mydata) +
geom_point()
```
Our base is provided via the ggplot function, and specifies the data at the very least, but commonly also the x and y aesthetics.
The geom\_point function adds a layer of points, and now we would have a scatterplot. Alternatively, you could have specified the x and y aesthetic at the geom\_point layer, but if you’re going to have the same x, y, color, etc. aesthetics regardless of layer, put it in the base. Otherwise, doing it by layer gives you more flexibility if needed. Geoms even have their own data argument, allowing you to combine information from several sources for a single visualization.
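As a small sketch of that per\-geom data argument, the following plots all of mtcars and then adds a second point layer restricted to the 8\-cylinder cars; the subset and the red highlight are just illustrative choices.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  # this layer supplies its own data: only the 8-cylinder cars
  geom_point(data = subset(mtcars, cyl == 8), color = 'red', size = 3)
```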
Aesthetics
----------
Aesthetics map data to various visual aspects of the plot, including size, color etc. The function used in ggplot to do this is aes.
```
aes(
x = myvar,
y = myvar2,
color = myvar3,
group = g
)
```
The key to understanding what goes inside the aes function is whether the value varies with the data. For example, if I want the size of points to be a specific fixed value, I would code the following.
```
... +
geom_point(..., size = 4)
```
However, if I want the size to be associated with the data in some way, I use it as an aesthetic.
```
... +
geom_point(aes(size = myvar))
```
The same goes for practically any aspect of a geom\- size, color, fill, etc. If it is a fixed value, set it outside the aesthetic. If it varies based on the data, put it within an aesthetic.
Geoms
-----
In the ggplot2 world, geoms are the geometric objects\- shapes, lines, and other parts of the visualization we want to display. Even if you use ggplot2 a lot, you probably didn’t know about many or most of these.
* geom\_abline: Reference lines: horizontal, vertical, and diagonal
* geom\_area: Ribbons and area plots
* geom\_bar: Bar charts
* geom\_bin2d: Heatmap of 2d bin counts
* geom\_blank: Draw nothing
* geom\_boxplot: A box and whiskers plot (in the style of Tukey)
* geom\_col: Bar charts
* geom\_contour: 2d contours of a 3d surface
* geom\_count: Count overlapping points
* geom\_crossbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_curve: Line segments and curves
* geom\_density: Smoothed density estimates
* geom\_density\_2d: Contours of a 2d density estimate
* geom\_dotplot: Dot plot
* geom\_errorbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_errorbarh: Horizontal error bars
* geom\_freqpoly: Histograms and frequency polygons
* geom\_hex: Hexagonal heatmap of 2d bin counts
* geom\_histogram: Histograms and frequency polygons
* geom\_hline: Reference lines: horizontal, vertical, and diagonal
* geom\_jitter: Jittered points
* geom\_label: Text
* geom\_line: Connect observations
* geom\_linerange: Vertical intervals: lines, crossbars \& errorbars
* geom\_map: Polygons from a reference map
* geom\_path: Connect observations
* geom\_point: Points
* geom\_pointrange: Vertical intervals: lines, crossbars \& errorbars
* geom\_polygon: Polygons
* geom\_qq: A quantile\-quantile plot
* geom\_qq\_line: A quantile\-quantile plot
* geom\_quantile: Quantile regression
* geom\_raster: Rectangles
* geom\_rect: Rectangles
* geom\_ribbon: Ribbons and area plots
* geom\_rug: Rug plots in the margins
* geom\_segment: Line segments and curves
* geom\_sf: Visualise sf objects
* geom\_sf\_label: Visualise sf objects
* geom\_sf\_text: Visualise sf objects
* geom\_smooth: Smoothed conditional means
* geom\_spoke: Line segments parameterised by location, direction and distance
* geom\_step: Connect observations
* geom\_text: Text
* geom\_tile: Rectangles
* geom\_violin: Violin plot
* geom\_vline: Reference lines: horizontal, vertical, and diagonal
Examples
--------
Let’s get more of a feel for things by seeing some examples that demonstrate some geoms and aesthetics.
To begin, after setting the base aesthetic, we’ll set some explicit values for the geom.
```
library(ggplot2)
library(dplyr)   # loaded for filter(), sample_frac(), and %>%, used in later examples
data("diamonds")
data('economics')
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(size = .5, color = 'peru')
```
Next we use two different geoms, and one is even using a different data source. Note that geoms have arguments both common and specific to them. In the following, `label` is used for geom\_text, but it would be ignored by geom\_line.
```
ggplot(aes(x = date, y = unemploy), data = economics) +
geom_line() +
geom_text(
aes(label = unemploy),
vjust = -.5,
data = filter(economics, date == '2009-10-01')
)
```
In the following, one setting, alpha (transparency), is not mapped to the data, while size and color are[45](#fn45).
```
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(size = carat, color = clarity), alpha = .05)
```
There are some other options to play with as well.
```
ggplot(aes(x = carat, y = price), data = diamonds %>% sample_frac(.01)) +
geom_point(aes(size = carat, color = clarity), key_glyph = "vpath")
```
Stats
-----
There are many statistical functions built in, and it is a key strength of ggplot that you don’t have to do a lot of processing for very common plots.
Here are some quantile regression lines:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_quantile()
```
By default, geom\_smooth uses a loess or additive model (GAM) smooth, depending on the data size. We can also do some fine\-tuning and use model\-based approaches for visualization; the following specifies a GAM with an adaptive smooth, fit via the mgcv package.
```
data(mcycle, package = 'MASS')
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_point() +
geom_smooth(formula = y ~ s(x, bs = 'ad'), method = 'gam')
```
Bootstrapped confidence intervals:
```
ggplot(mtcars, aes(cyl, mpg)) +
geom_point() +
stat_summary(
fun.data = "mean_cl_boot",
colour = "orange",
alpha = .75,
size = 1
)
```
The take\-home message here is to always let ggplot do the work for you if at all possible. However, I will say that I find it easier to create the summary data I want to visualize with tidyverse tools, rather than use stat\_summary, and you may have a similar experience.
Scales
------
Often there are many things we want to change about the plot, for example, the size and values of axis labels, the range of sizes for points to take, the specific colors we want to use, and so forth. Be aware that there are a great many options here, and you will regularly want to use them.
A very common thing you’ll do is change the labels for the axes. You definitely don’t have to go and change the variable name itself to do this, just use the labs function. There are also functions for individual parts, e.g. xlab, ylab and ggtitle.
```
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_smooth(se = FALSE) +
labs(
x = 'milliseconds after impact',
y = 'head acceleration',
title = 'Motorcycle Accident'
)
```
A frequent operation is adjusting the look of the x and y axes, for example their limits and tick marks. Like labs, there is a general lims function, as well as functions for the individual parts, e.g. xlim and ylim. In addition, we may want to get really detailed using scale\_x\_\* or scale\_y\_\*.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
ylim(c(0, 60))
```
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_y_continuous(
limits = c(0, 60),
breaks = seq(0, 60, by = 12),
minor_breaks = seq(6, 60, by = 6)
)
```
Another common option is to change the size of points in some way. While we assign the aesthetic as before, it comes with defaults that might not work for a given situation. Play around with the range values.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_size(range = c(1, 3))
```
We will talk about color issues later, but for now, you may want to apply something besides the default options. The following shows a built\-in color scale for a color aesthetic that is treated as continuous, and then a discrete one for which we supply our own colors (these particular values come from plotly’s default color scheme).
```
ggplot(mpg, aes(x = displ, y = hwy, color = cyl)) +
geom_point() +
scale_color_gradient2()
```
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
geom_point() +
scale_color_manual(values = c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"))
```
We can even change the scale of the data itself.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
scale_x_log10()
```
In short, scale alterations are really useful for getting just the plot you want, and there is a lot of flexibility for you to work with. There are a lot of scales too, so know what you have available.
* scale\_alpha, scale\_alpha\_continuous, scale\_alpha\_date, scale\_alpha\_datetime, scale\_alpha\_discrete, scale\_alpha\_identity, scale\_alpha\_manual, scale\_alpha\_ordinal: Alpha transparency scales
* scale\_color\_brewer, scale\_color\_distiller: Sequential, diverging and qualitative colour scales from colorbrewer.org
* scale\_color\_continuous, scale\_color\_discrete, scale\_color\_gradient, scale\_color\_gradient2, scale\_color\_gradientn, scale\_color\_grey, scale\_color\_hue, scale\_color\_identity, scale\_color\_manual, scale\_color\_viridis\_c, scale\_color\_viridis\_d, scale\_continuous\_identity Various color scales
* scale\_discrete\_identity, scale\_discrete\_manual: Discrete scales
* scale\_fill\_brewer, scale\_fill\_continuous, scale\_fill\_date, scale\_fill\_datetime, scale\_fill\_discrete, scale\_fill\_distiller, scale\_fill\_gradient, scale\_fill\_gradient2, scale\_fill\_gradientn, scale\_fill\_grey, scale\_fill\_hue, scale\_fill\_identity, scale\_fill\_manual, scale\_fill\_ordinal, scale\_fill\_viridis\_c, scale\_fill\_viridis\_d: Scales for geoms that can be filled with color
* scale\_linetype, scale\_linetype\_continuous, scale\_linetype\_discrete, scale\_linetype\_identity, scale\_linetype\_manual: Scales for line patterns
* scale\_shape, scale\_shape\_continuous, scale\_shape\_discrete, scale\_shape\_identity, scale\_shape\_manual, scale\_shape\_ordinal: Scales for shapes, aka glyphs
* scale\_size, scale\_size\_area, scale\_size\_continuous, scale\_size\_date, scale\_size\_datetime, scale\_size\_discrete, scale\_size\_identity, scale\_size\_manual, scale\_size\_ordinal: Scales for area or radius
* scale\_x\_continuous, scale\_x\_date, scale\_x\_datetime, scale\_x\_discrete, scale\_x\_log10, scale\_x\_reverse, scale\_x\_sqrt, scale\_y\_continuous, scale\_y\_date, scale\_y\_datetime, scale\_y\_discrete, scale\_y\_log10, scale\_y\_reverse, scale\_y\_sqrt: Position scales for continuous data (x \& y)
* scale\_x\_time, scale\_y\_time: Position scales for date/time data
Facets
------
Facets allow for paneled display, a very common operation, since we often want comparison plots. The facet\_grid function will produce a grid, and often this is all that’s needed. However, facet\_wrap is more flexible, while possibly taking a bit of extra effort to get things just the way you want. Both use a formula approach to specify the grouping.
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
Multiple plots
--------------
Often we want distinct visualizations to come together in one plot. There are several packages that can help you here: gridExtra, cowplot, and more recently patchwork[46](#fn46). The latter especially makes things easy.
```
library(patchwork)
g1 = ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
g2 = ggplot(mtcars, aes(wt)) +
geom_density()
g3 = ggplot(mtcars, aes(mpg)) +
geom_density()
g1 / # initial plot, place next part underneath
(g2 | g3) # groups g2 and g3 side by side
```
Not that you want this, but just to demonstrate the flexibility.
```
p1 = ggplot(mtcars) + geom_point(aes(mpg, disp))
p2 = ggplot(mtcars) + geom_boxplot(aes(gear, disp, group = gear))
p3 = ggplot(mtcars) + geom_smooth(aes(disp, qsec))
p4 = ggplot(mtcars) + geom_bar(aes(carb))
p5 = ggplot(mtcars) + geom_violin(aes(cyl, mpg, group = cyl))
p1 +
p2 +
(p3 / p4) * theme_void() +
p5 +
plot_layout(widths = c(2, 1))
```
You’ll typically want to use facets to show subsets of the same data, and tools like patchwork to show different kinds of plots together.
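A couple of other patchwork conveniences are worth knowing; the following is only a brief sketch (see the patchwork documentation for the full set): plot\_annotation adds an overall title, and plot\_layout can collect duplicated legends into one.
```
library(patchwork)

pa = ggplot(mpg, aes(x = displ, y = hwy, color = class)) + geom_point()
pb = ggplot(mpg, aes(x = cty,   y = hwy, color = class)) + geom_point()

(pa | pb) +
  plot_layout(guides = 'collect') +                 # merge the two identical class legends
  plot_annotation(title = 'Fuel economy overview')  # an overall title for the combined plot
```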
Fine control
------------
ggplot2 makes it easy to get good\-looking graphs quickly. However, the amount of fine control available is extensive. The following plot is hideous (aside from the background, which is totally rad), but it illustrates the point.
```
ggplot(aes(x = carat, y = price), data = diamonds) +
annotation_custom(
rasterGrob(   # rasterGrob() comes from the grid package; lambosun below is an image object created elsewhere (not shown)
lambosun,
width = unit(1, "npc"),
height = unit(1, "npc"),
interpolate = FALSE
),-Inf,
Inf,
-Inf,
Inf
) +
geom_point(aes(color = clarity), alpha = .5) +
scale_y_log10(breaks = c(1000, 5000, 10000)) +
xlim(0, 10) +
scale_color_brewer(type = 'div') +
facet_wrap( ~ cut, ncol = 3) +
theme_minimal() +
theme(
axis.ticks.x = element_line(color = 'darkred'),
axis.text.x = element_text(angle = -45),
axis.text.y = element_text(size = 20),
strip.text = element_text(color = 'forestgreen'),
strip.background = element_blank(),
panel.grid.minor = element_line(color = 'lightblue'),
legend.key = element_rect(linetype = 4),
legend.position = 'bottom'
)
```
Themes
------
In the last example you saw two uses of themes: a built\-in version that comes with ggplot (theme\_minimal), and specific customization (theme(…)). The built\-in themes provide ready\-made approaches that might already be good enough for a finished product. The theme function has many arguments, each of which takes either a specific value or an element function:
* element\_rect
* element\_line
* element\_text
* element\_blank
Each of those element functions has arguments specific to it. For example, for element\_text you can specify the font size, while for element\_line you could specify the line type.
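For instance, a small sketch of those element functions in action; the specific values are arbitrary.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  theme(
    axis.title       = element_text(size = 14),           # element_text: text settings
    panel.grid.major = element_line(linetype = 'dashed'),  # element_line: line settings
    panel.background = element_rect(fill = 'grey95'),      # element_rect: backgrounds and borders
    axis.ticks       = element_blank()                     # element_blank: remove the element entirely
  )
```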
Note that the base theme of ggplot (and, I would say, of every plotting package) will probably need manipulation before a plot is ready for presentation. For example, the default ggplot theme doesn’t work well for web presentation, and is even worse for print. You will almost invariably need to tweak it. I suggest creating and saving your own custom theme for easy application with any visualization package you use frequently.
Extensions
----------
ggplot2 now has its own extension system, and there is even a [website](http://www.ggplot2-exts.org/) to track the extensions. Examples include:
* additional themes
* maps
* interactivity
* animations
* marginal plots
* network graphs
* time series
* aligning multiple ggplot visualizations, possibly of different types
Here’s an example with gganimate.
```
library(gganimate)
load('data/gapminder.RData')
gap_plot = gapminder_2019 %>%
filter(giniPercap != 40)
gap_plot_filter = gap_plot %>%
filter(country %in% c('United States', 'Mexico', 'Canada'))
initial_plot = ggplot(gap_plot, aes(x = year, y = giniPercap, group = country)) +
geom_line(alpha = .05) +
geom_path(
aes(color = country),
lwd = 2,
arrow = arrow(
length = unit(0.25, "cm")
),
alpha = .5,
data = gap_plot_filter,
show.legend = FALSE
) +
geom_text(
aes(color = country, label = country),
nudge_x = 5,
nudge_y = 2,
size = 2,
data = gap_plot_filter,
show.legend = FALSE
) +
theme_clean() +
transition_reveal(year)
animate(initial_plot, end_pause = 50, nframes = 150, rewind = TRUE)
```
As one can see, ggplot2 is only the beginning. You’ll have a lot of tools at your disposal. Furthermore, many modeling and other packages will produce ggplot graphics to which you can add your own layers and tweak like you would any other ggplot.
ggplot2 Summary
---------------
ggplot2 is an easy to use, but powerful visualization tool. It allows one to think in many dimensions for any graph, and extends well beyond the basics. Use it to easily create more interesting visualizations.
ggplot2 Exercises
-----------------
### Exercise 0
Load the ggplot2 package if you haven’t already.
### Exercise 1
Create two plots, one a scatterplot (e.g. with geom\_point) and one with lines (e.g. geom\_line), with a data set of your choosing (all of the following are in base R or available after loading ggplot2). Some suggestions:
* faithful: Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
* msleep: mammals sleep dataset with sleep times and weights etc.
* diamonds: used in the slides
* economics: US economic time series.
* txhousing: Housing sales in TX.
* midwest: Midwest demographics.
* mpg: Fuel economy data from 1999 and 2008 for 38 popular models of car
Recall the basic form for ggplot.
```
ggplot(data = *, aes(x = *, y = *, other)) +
geom_*() +
otherLayers, theme etc.
```
Themes to play with:
* theme\_bw
* theme\_classic
* theme\_dark
* theme\_gray
* theme\_light
* theme\_linedraw
* theme\_minimal
* theme\_clean (requires the visibly package and an appreciation of the Lamborghini background from the previous visualization)
### Exercise 2
Play around and change the arguments to the following. You’ll need to install the maps package.
* For example, add points for all county midpoints. For that you’d need to map the point geom’s x and y to the longitude and latitude as aesthetics, and set its data argument to the seats data frame.
* Make the color of the points or text based on `subregion`. This will require adding the fill argument to the polygon geom and removing the NA setting. In addition, add the argument show.legend\=F (outside the aesthetic), or you’ll have a problematic legend (recall what we said before about too many colors!). Try making color based on subregion too.
* See if you can use element\_blank on a theme argument to remove the axis information. See ?theme for ideas.
```
library(maps)
mi = map_data("county", "michigan")
seats = mi %>%
group_by(subregion) %>%
summarise_at(vars(lat, long), function(x) median(range(x)))
# inspect the data
# head(mi)
# head(seats)
ggplot(mi, aes(long, lat)) +
geom_polygon(aes(group = subregion), fill = NA, colour = "grey60") +
geom_text(aes(label = subregion), data = seats, size = 1, angle = 45) +
geom_point(x=-83.748333, y=42.281389, color='#1e90ff', size=3) +
theme_minimal() +
theme(panel.grid=element_blank())
```
Python Plotnine Notebook
------------------------
The R community really lucked out with ggplot, and the basic philosophy behind it is missing from practically every other static plotting package or tool. Python’s version of base R plotting is matplotlib, which continues to serve people well. But like R base plots, it can take a lot of work to get anything remotely visually appealing. Seaborn is another option, but it still just isn’t in the same league.
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Layers
------
In general, we start with a base layer and add to it. In most cases you’ll start as follows.
```
# recall that starwars is in the dplyr package
ggplot(aes(x = height, y = mass), data = starwars)
```
The code above would just produce a plot background, but nothing else. However, with the foundation in place, we’re now ready to add something to it. Let’s add some points (the outlier is Jabba the Hut).
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point()
```
Perhaps we want to change labels or theme. These would be additional layers to the plot.
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point(color = 'white') +
labs(x = 'Height in cm', y = 'Weight in kg') +
theme_dark()
```
Each layer is consecutively added by means of a pipe operator, and layers may regard geoms, scales, labels, facets etc. You may have many different layers to produce one plot, and there really is no limit. However some efficiencies may be possible for a given situation. For example, it’s more straightforward to use geom\_smooth than calculate fits, standard errors etc. and then add multiple geoms to produce the same thing. This is the sort of thing you’ll get used to as you use ggplot more.
Piping
------
As we saw, layers are added via piping (\+). The first layers added after the base are typically geoms, or geometric objects that represent the data, and include things like:
* points
* lines
* density
* text
In case you’re wondering why ggplot doesn’t use `%>%` as in the tidyverse and other visualization packages, it’s because ggplot2 was using pipes before it was cool, well before those came along. Otherwise, the concept is the same as we saw in the [data processing section](pipes.html#pipes).
```
ggplot(aes(x = myvar, y = myvar2), data = mydata) +
geom_point()
```
Our base is provided via the ggplot function, and specifies the data at the very least, but commonly also the x and y aesthetics.
The geom\_point function adds a layer of points, and now we would have a scatterplot. Alternatively, you could have specified the x and y aesthetic at the geom\_point layer, but if you’re going to have the same x, y, color, etc. aesthetics regardless of layer, put it in the base. Otherwise, doing it by layer gives you more flexibility if needed. Geoms even have their own data argument, allowing you to combine information from several sources for a single visualization.
Aesthetics
----------
Aesthetics map data to various visual aspects of the plot, including size, color etc. The function used in ggplot to do this is aes.
```
aes(
x = myvar,
y = myvar2,
color = myvar3,
group = g
)
```
The best way to understand what goes into the aes function is if the value is varying. For example, if I want the size of points to be a certain value, I would code the following.
```
... +
geom_point(..., size = 4)
```
However, if I want the size to be associated with the data in some way, I use it as an aesthetic.
```
... +
geom_point(aes(size = myvar))
```
The same goes for practically any aspect of a geom\- size, color, fill, etc. If it is a fixed value, set it outside the aesthetic. If it varies based on the data, put it within an aesthetic.
Geoms
-----
In the ggplot2 world, geoms are the geometric objects\- shapes, lines, and other parts of the visualization we want to display. Even if you use ggplot2 a lot, you probably didn’t know about many or most of these.
* geom\_abline: Reference lines: horizontal, vertical, and diagonal
* geom\_area: Ribbons and area plots
* geom\_bar: Bar charts
* geom\_bin2d: Heatmap of 2d bin counts
* geom\_blank: Draw nothing
* geom\_boxplot: A box and whiskers plot (in the style of Tukey)
* geom\_col: Bar charts
* geom\_contour: 2d contours of a 3d surface
* geom\_count: Count overlapping points
* geom\_crossbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_curve: Line segments and curves
* geom\_density: Smoothed density estimates
* geom\_density\_2d: Contours of a 2d density estimate
* geom\_dotplot: Dot plot
* geom\_errorbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_errorbarh: Horizontal error bars
* geom\_freqpoly: Histograms and frequency polygons
* geom\_hex: Hexagonal heatmap of 2d bin counts
* geom\_histogram: Histograms and frequency polygons
* geom\_hline: Reference lines: horizontal, vertical, and diagonal
* geom\_jitter: Jittered points
* geom\_label: Text
* geom\_line: Connect observations
* geom\_linerange: Vertical intervals: lines, crossbars \& errorbars
* geom\_map: Polygons from a reference map
* geom\_path: Connect observations
* geom\_point: Points
* geom\_pointrange: Vertical intervals: lines, crossbars \& errorbars
* geom\_polygon: Polygons
* geom\_qq: A quantile\-quantile plot
* geom\_qq\_line: A quantile\-quantile plot
* geom\_quantile: Quantile regression
* geom\_raster: Rectangles
* geom\_rect: Rectangles
* geom\_ribbon: Ribbons and area plots
* geom\_rug: Rug plots in the margins
* geom\_segment: Line segments and curves
* geom\_sf: Visualise sf objects
* geom\_sf\_label: Visualise sf objects
* geom\_sf\_text: Visualise sf objects
* geom\_smooth: Smoothed conditional means
* geom\_spoke: Line segments parameterised by location, direction and distance
* geom\_step: Connect observations
* geom\_text: Text
* geom\_tile: Rectangles
* geom\_violin: Violin plot
* geom\_vline: Reference lines: horizontal, vertical, and diagonal
Examples
--------
Let’s get more of a feel for things by seeing some examples that demonstrate some geoms and aesthetics.
To begin, after setting the base aesthetic, we’ll set some explicit values for the geom.
```
library(ggplot2)
data("diamonds")
data('economics')
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(size = .5, color = 'peru')
```
Next we use two different geoms, and one is even using a different data source. Note that geoms have arguments both common and specific to them. In the following, `label` is used for geom\_text, but it would be ignored by geom\_line.
```
ggplot(aes(x = date, y = unemploy), data = economics) +
geom_line() +
geom_text(
aes(label = unemploy),
vjust = -.5,
data = filter(economics, date == '2009-10-01')
)
```
In the following, one setting, alpha (transparency), is not mapped to the data, while size and color are[45](#fn45).
```
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(size = carat, color = clarity), alpha = .05)
```
There are some other options to play with as well.
```
ggplot(aes(x = carat, y = price), data = diamonds %>% sample_frac(.01)) +
geom_point(aes(size = carat, color = clarity), key_glyph = "vpath")
```
Stats
-----
There are many statistical functions built in, and it is a key strength of ggplot that you don’t have to do a lot of processing for very common plots.
Her are some quantile regression lines:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_quantile()
```
The following shows loess (or additive model) smooths. We can do some fine\-tuning and use model\-based approaches for visualization.
```
data(mcycle, package = 'MASS')
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_point() +
geom_smooth(formula = y ~ s(x, bs = 'ad'), method = 'gam')
```
Bootstrapped confidence intervals:
```
ggplot(mtcars, aes(cyl, mpg)) +
geom_point() +
stat_summary(
fun.data = "mean_cl_boot",
colour = "orange",
alpha = .75,
size = 1
)
```
The take\-home message here is to always let ggplot do the work for you if at all possible. However, I will say that I find it easier to create the summary data I want to visualize with tidyverse tools, rather than use stat\_summary, and you may have a similar experience.
Scales
------
Often there are many things we want to change about the plot, for example, the size and values of axis labels, the range of sizes for points to take, the specific colors we want to use, and so forth. Be aware that there are a great many options here, and you will regularly want to use them.
A very common thing you’ll do is change the labels for the axes. You definitely don’t have to go and change the variable name itself to do this, just use the labs function. There are also functions for individual parts, e.g. xlab, ylab and ggtitle.
```
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_smooth(se = FALSE) +
labs(
x = 'milliseconds after impact',
y = 'head acceleration',
title = 'Motorcycle Accident'
)
```
A frequent operation is changing the x and y look in the form of limits and tick marks. Like labs, there is a general lims function and specific functions for just the specific parts. In addition, we may want to get really detailed using scale\_x\_\* or scale\_y\_\*.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
ylim(c(0, 60))
```
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_y_continuous(
limits = c(0, 60),
breaks = seq(0, 60, by = 12),
minor_breaks = seq(6, 60, by = 6)
)
```
Another common option is to change the size of points in some way. While we assign the aesthetic as before, it comes with defaults that might not work for a given situation. Play around with the range values.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_size(range = c(1, 3))
```
We will talk about color issues later, but for now, you may want to apply something besides the default options. The following shows a built\-in color scale for a color aesthetic that is treated as continuous, and one that is discrete and which we want to supply our own colors (these actually come from plotly’s default color scheme).
```
ggplot(mpg, aes(x = displ, y = hwy, color = cyl)) +
geom_point() +
scale_color_gradient2()
```
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
geom_point() +
scale_color_manual(values = c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"))
```
We can even change the scale of the data itself.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
scale_x_log10()
```
In short, scale alterations are really useful for getting just the plot you want, and there is a lot of flexibility for you to work with. There are a lot of scales too, so know what you have available.
* scale\_alpha, scale\_alpha\_continuous, scale\_alpha\_date, scale\_alpha\_datetime, scale\_alpha\_discrete, scale\_alpha\_identity, scale\_alpha\_manual, scale\_alpha\_ordinal: Alpha transparency scales
* scale\_color\_brewer, scale\_color\_distiller: Sequential, diverging and qualitative colour scales from colorbrewer.org
* scale\_color\_continuous, scale\_color\_discrete, scale\_color\_gradient, scale\_color\_gradient2, scale\_color\_gradientn, scale\_color\_grey, scale\_color\_hue, scale\_color\_identity, scale\_color\_manual, scale\_color\_viridis\_c, scale\_color\_viridis\_d, scale\_continuous\_identity Various color scales
* scale\_discrete\_identity, scale\_discrete\_manual: Discrete scales
* scale\_fill\_brewer, scale\_fill\_continuous, scale\_fill\_date, scale\_fill\_datetime, scale\_fill\_discrete, scale\_fill\_distiller, scale\_fill\_gradient, scale\_fill\_gradient2, scale\_fill\_gradientn, scale\_fill\_grey, scale\_fill\_hue, scale\_fill\_identity, scale\_fill\_manual, scale\_fill\_ordinal, scale\_fill\_viridis\_c, scale\_fill\_viridis\_d: Scales for geoms that can be filled with color
* scale\_linetype, scale\_linetype\_continuous, scale\_linetype\_discrete, scale\_linetype\_identity, scale\_linetype\_manual: Scales for line patterns
* scale\_shape, scale\_shape\_continuous, scale\_shape\_discrete, scale\_shape\_identity, scale\_shape\_manual, scale\_shape\_ordinal: Scales for shapes, aka glyphs
* scale\_size, scale\_size\_area, scale\_size\_continuous, scale\_size\_date, scale\_size\_datetime, scale\_size\_discrete, scale\_size\_identity, scale\_size\_manual, scale\_size\_ordinal: Scales for area or radius
* scale\_x\_continuous, scale\_x\_date, scale\_x\_datetime, scale\_x\_discrete, scale\_x\_log10, scale\_x\_reverse, scale\_x\_sqrt, \< scale\_y\_continuous, scale\_y\_date, scale\_y\_datetime, scale\_y\_discrete, scale\_y\_log10, scale\_y\_reverse, scale\_y\_sqrt: Position scales for continuous data (x \& y)
* scale\_x\_time, scale\_y\_time: Position scales for date/time data
Facets
------
Facets allow for paneled display, a very common operation. In general, we often want comparison plots. The facet\_grid function will produce a grid, and often this is all that’s needed. However, facet\_wrap is more flexible, while possibly taking a bit extra effort to get things just the way you want. Both use a formula approach to specify the grouping.
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
Multiple plots
--------------
Often we want distinct visualizations to come together in one plot. There are several packages that can help you here: gridExtra, cowplot, and more recently patchwork[46](#fn46). The latter especially makes things easy.
```
library(patchwork)
g1 = ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
g2 = ggplot(mtcars, aes(wt)) +
geom_density()
g3 = ggplot(mtcars, aes(mpg)) +
geom_density()
g1 / # initial plot, place next part underneath
(g2 | g3) # groups g2 and g3 side by side
```
Not that you want this, but just to demonstrate the flexibility.
```
p1 = ggplot(mtcars) + geom_point(aes(mpg, disp))
p2 = ggplot(mtcars) + geom_boxplot(aes(gear, disp, group = gear))
p3 = ggplot(mtcars) + geom_smooth(aes(disp, qsec))
p4 = ggplot(mtcars) + geom_bar(aes(carb))
p5 = ggplot(mtcars) + geom_violin(aes(cyl, mpg, group = cyl))
p1 +
p2 +
(p3 / p4) * theme_void() +
p5 +
plot_layout(widths = c(2, 1))
```
You’ll typically want to use facets to show subsets of the same data, and tools like patchwork to show different kinds of plots together.
Fine control
------------
ggplot2 makes it easy to get good looking graphs quickly. However the amount of fine control is extensive. The following plot is hideous (aside from the background, which is totally rad), but illustrates the point.
```
ggplot(aes(x = carat, y = price), data = diamonds) +
annotation_custom(
rasterGrob(
lambosun,
width = unit(1, "npc"),
height = unit(1, "npc"),
interpolate = FALSE
),-Inf,
Inf,
-Inf,
Inf
) +
geom_point(aes(color = clarity), alpha = .5) +
scale_y_log10(breaks = c(1000, 5000, 10000)) +
xlim(0, 10) +
scale_color_brewer(type = 'div') +
facet_wrap( ~ cut, ncol = 3) +
theme_minimal() +
theme(
axis.ticks.x = element_line(color = 'darkred'),
axis.text.x = element_text(angle = -45),
axis.text.y = element_text(size = 20),
strip.text = element_text(color = 'forestgreen'),
strip.background = element_blank(),
panel.grid.minor = element_line(color = 'lightblue'),
legend.key = element_rect(linetype = 4),
legend.position = 'bottom'
)
```
Themes
------
In the last example you saw two uses of a theme\- a built\-in version that comes with ggplot (theme\_minimal), and specific customization (theme(…)). The built\-in themes provide ready\-made approaches that might already be good enough for a finished product. For the theme function, each argument, and there are many, takes on a specific value or an element function:
* element\_rect
* element\_line
* element\_text
* element\_blank
Each of those element functions has arguments specific to it. For example, for element\_text you can specify the font size, while for element line you could specify the line type.
Note that the base theme of ggplot, and I would say every plotting package, is probably going to need manipulation before a plot is ready for presentation. For example, the ggplot theme doesn’t work well for web presentation, and is even worse for print. You will almost invariably need to tweak it. I suggest using and saving your own custom theme for easy application for any visualization package you use frequently.
Extensions
----------
ggplot2 now has its own extension system, and there is even a [website](http://www.ggplot2-exts.org/) to track the extensions. Examples include:
* additional themes
* maps
* interactivity
* animations
* marginal plots
* network graphs
* time series
* aligning multiple ggplot visualizations, possibly of different types
Here’s an example with gganimate.
```
library(gganimate)
load('data/gapminder.RData')
gap_plot = gapminder_2019 %>%
filter(giniPercap != 40)
gap_plot_filter = gap_plot %>%
filter(country %in% c('United States', 'Mexico', 'Canada'))
initial_plot = ggplot(gap_plot, aes(x = year, y = giniPercap, group = country)) +
geom_line(alpha = .05) +
geom_path(
aes(color = country),
lwd = 2,
arrow = arrow(
length = unit(0.25, "cm")
),
alpha = .5,
data = gap_plot_filter,
show.legend = FALSE
) +
geom_text(
aes(color = country, label = country),
nudge_x = 5,
nudge_y = 2,
size = 2,
data = gap_plot_filter,
show.legend = FALSE
) +
  theme_clean() + # theme_clean() comes from the visibly package
transition_reveal(year)
animate(initial_plot, end_pause = 50, nframes = 150, rewind = TRUE)
```
As one can see, ggplot2 is only the beginning. You’ll have a lot of tools at your disposal. Furthermore, many modeling and other packages will produce ggplot graphics to which you can add your own layers and tweak like you would any other ggplot.
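Since a ggplot is just an R object, the same workflow applies whether you built the plot yourself or a package returned it: store it, then add layers or adjust the theme as needed. A minimal sketch, with a plot we build ourselves standing in for a package\-produced one:
```
library(ggplot2)

p = ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point()

# add layers and tweak the stored plot like any other ggplot
p +
  geom_smooth(se = FALSE) +
  labs(title = 'A stored plot, tweaked after the fact') +
  theme_minimal()
```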
ggplot2 Summary
---------------
ggplot2 is an easy\-to\-use but powerful visualization tool. It allows one to think in many dimensions for any graph, and extends well beyond the basics. Use it to easily create more interesting visualizations.
ggplot2 Exercises
-----------------
### Exercise 0
Load the ggplot2 package if you haven’t already.
### Exercise 1
Create two plots, one a scatterplot (e.g. with geom\_point) and one with lines (e.g. geom\_line), with a data set of your choosing (all of the following are base R or available after loading ggplot2); a minimal sketch follows the list of themes below. Some suggestions:
* faithful: Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
* msleep: mammals sleep dataset with sleep times and weights etc.
* diamonds: used in the slides
* economics: US economic time series.
* txhousing: Housing sales in TX.
* midwest: Midwest demographics.
* mpg: Fuel economy data from 1999 and 2008 for 38 popular models of car
Recall the basic form for ggplot.
```
ggplot(data = *, aes(x = *, y = *, other)) +
geom_*() +
otherLayers, theme etc.
```
Themes to play with:
* theme\_bw
* theme\_classic
* theme\_dark
* theme\_gray
* theme\_light
* theme\_linedraw
* theme\_minimal
* theme\_clean (requires the visibly package and an appreciation of the Lamborghini background from the previous visualization)
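If you want a starting point, here is a minimal sketch of one possible approach using the mpg and economics data mentioned above; swap in whichever data set, geoms, and theme you prefer.
```
library(ggplot2)

# scatterplot
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  theme_minimal()

# line plot
ggplot(economics, aes(x = date, y = unemploy)) +
  geom_line() +
  theme_light()
```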
### Exercise 2
Play around and change the arguments to the following. You’ll need to install the maps package.
* For example, do points for all county midpoints. For that you’d need to change the x and y for the point geom to an aesthetic based on the longitude and latitude, as well as add its data argument to use the seats data frame.
* Make the color of the points or text based on `subregion`. This will require adding the fill argument to the polygon geom and removing the NA setting. In addition, add the argument show.legend\=F (outside the aesthetic), or you’ll have a problematic legend (recall what we said before about too many colors!). Try making color based on subregion too.
* See if you can use element\_blank on a theme argument to remove the axis information. See ?theme for ideas.
```
library(maps)
mi = map_data("county", "michigan")
seats = mi %>%
group_by(subregion) %>%
summarise_at(vars(lat, long), function(x) median(range(x)))
# inspect the data
# head(mi)
# head(seats)
ggplot(mi, aes(long, lat)) +
geom_polygon(aes(group = subregion), fill = NA, colour = "grey60") +
geom_text(aes(label = subregion), data = seats, size = 1, angle = 45) +
geom_point(x=-83.748333, y=42.281389, color='#1e90ff', size=3) +
theme_minimal() +
theme(panel.grid=element_blank())
```
Python Plotnine Notebook
------------------------
The R community really lucked out with ggplot, and the basic philosophy behind it is missing from practically every other static plotting package or tool. Python’s version of base R plotting is matplotlib, which continues to serve people well. But like R base plots, it can take a lot of work to get anything remotely visually appealing. Seaborn is another option, but it still just isn’t in the same league.
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A Jupyter notebook demonstrating most of the previous material is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/ggplot2.html |
ggplot2
=======
Visualization is key to telling the data’s story, and it can take a lot of work to get things to look just right. But, it can also be a lot of fun, so let’s dive in!
When it comes to visualization, the most [popular](https://r-pkg.org/downloaded) package used in R is ggplot2. It’s so popular, it or its aesthetic is even copied in other languages/programs as well. It entails a grammar of graphics (hence the **gg**), and learning that grammar is key to using it effectively. Some of the strengths of ggplot2 include:
* The ease of getting a good looking plot
* Easy customization
* A lot of necessary data processing is done for you
* Clear syntax
* Easy multidimensional approach
* Decent default color scheme as a default
* *Lots* of extensions
Every graph is built from the same few parts, and it’s important to be aware of a few key ideas, which we will cover in turn.
* Layers (and geoms)
* Piping
* Aesthetics
* Facets
* Scales
* Themes
* Extensions
Note that while you can obviously use base R for visualization, it’s not going to be as easy or as flexible as ggplot2.
Layers
------
In general, we start with a base layer and add to it. In most cases you’ll start as follows.
```
# recall that starwars is in the dplyr package
ggplot(aes(x = height, y = mass), data = starwars)
```
The code above would just produce a plot background, but nothing else. However, with the foundation in place, we’re now ready to add something to it. Let’s add some points (the outlier is Jabba the Hut).
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point()
```
Perhaps we want to change labels or theme. These would be additional layers to the plot.
```
ggplot(aes(x = height, y = mass), data = starwars) +
geom_point(color = 'white') +
labs(x = 'Height in cm', y = 'Weight in kg') +
theme_dark()
```
Each layer is consecutively added by means of a pipe operator, and layers may regard geoms, scales, labels, facets etc. You may have many different layers to produce one plot, and there really is no limit. However some efficiencies may be possible for a given situation. For example, it’s more straightforward to use geom\_smooth than calculate fits, standard errors etc. and then add multiple geoms to produce the same thing. This is the sort of thing you’ll get used to as you use ggplot more.
Piping
------
As we saw, layers are added via piping (\+). The first layers added after the base are typically geoms, or geometric objects that represent the data, and include things like:
* points
* lines
* density
* text
In case you’re wondering why ggplot doesn’t use `%>%` as in the tidyverse and other visualization packages, it’s because ggplot2 was using pipes before it was cool, well before those came along. Otherwise, the concept is the same as we saw in the [data processing section](pipes.html#pipes).
```
ggplot(aes(x = myvar, y = myvar2), data = mydata) +
geom_point()
```
Our base is provided via the ggplot function, and specifies the data at the very least, but commonly also the x and y aesthetics.
The geom\_point function adds a layer of points, and now we would have a scatterplot. Alternatively, you could have specified the x and y aesthetic at the geom\_point layer, but if you’re going to have the same x, y, color, etc. aesthetics regardless of layer, put it in the base. Otherwise, doing it by layer gives you more flexibility if needed. Geoms even have their own data argument, allowing you to combine information from several sources for a single visualization.
Aesthetics
----------
Aesthetics map data to various visual aspects of the plot, including size, color etc. The function used in ggplot to do this is aes.
```
aes(
x = myvar,
y = myvar2,
color = myvar3,
group = g
)
```
The best way to understand what goes into the aes function is if the value is varying. For example, if I want the size of points to be a certain value, I would code the following.
```
... +
geom_point(..., size = 4)
```
However, if I want the size to be associated with the data in some way, I use it as an aesthetic.
```
... +
geom_point(aes(size = myvar))
```
The same goes for practically any aspect of a geom\- size, color, fill, etc. If it is a fixed value, set it outside the aesthetic. If it varies based on the data, put it within an aesthetic.
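To make the distinction concrete, here is a minimal sketch using the mpg data that ships with ggplot2: the first plot fixes color and size outside aes, while the second maps color to a variable inside aes.
```
library(ggplot2)

# fixed values: every point gets the same color and size (set outside aes)
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(color = 'firebrick', size = 2)

# mapped value: color now varies with the data, so it goes inside aes
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(aes(color = class), size = 2)
```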
Geoms
-----
In the ggplot2 world, geoms are the geometric objects\- shapes, lines, and other parts of the visualization we want to display. Even if you use ggplot2 a lot, you probably didn’t know about many or most of these.
* geom\_abline: Reference lines: horizontal, vertical, and diagonal
* geom\_area: Ribbons and area plots
* geom\_bar: Bar charts
* geom\_bin2d: Heatmap of 2d bin counts
* geom\_blank: Draw nothing
* geom\_boxplot: A box and whiskers plot (in the style of Tukey)
* geom\_col: Bar charts
* geom\_contour: 2d contours of a 3d surface
* geom\_count: Count overlapping points
* geom\_crossbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_curve: Line segments and curves
* geom\_density: Smoothed density estimates
* geom\_density\_2d: Contours of a 2d density estimate
* geom\_dotplot: Dot plot
* geom\_errorbar: Vertical intervals: lines, crossbars \& errorbars
* geom\_errorbarh: Horizontal error bars
* geom\_freqpoly: Histograms and frequency polygons
* geom\_hex: Hexagonal heatmap of 2d bin counts
* geom\_histogram: Histograms and frequency polygons
* geom\_hline: Reference lines: horizontal, vertical, and diagonal
* geom\_jitter: Jittered points
* geom\_label: Text
* geom\_line: Connect observations
* geom\_linerange: Vertical intervals: lines, crossbars \& errorbars
* geom\_map: Polygons from a reference map
* geom\_path: Connect observations
* geom\_point: Points
* geom\_pointrange: Vertical intervals: lines, crossbars \& errorbars
* geom\_polygon: Polygons
* geom\_qq: A quantile\-quantile plot
* geom\_qq\_line: A quantile\-quantile plot
* geom\_quantile: Quantile regression
* geom\_raster: Rectangles
* geom\_rect: Rectangles
* geom\_ribbon: Ribbons and area plots
* geom\_rug: Rug plots in the margins
* geom\_segment: Line segments and curves
* geom\_sf: Visualise sf objects
* geom\_sf\_label: Visualise sf objects
* geom\_sf\_text: Visualise sf objects
* geom\_smooth: Smoothed conditional means
* geom\_spoke: Line segments parameterised by location, direction and distance
* geom\_step: Connect observations
* geom\_text: Text
* geom\_tile: Rectangles
* geom\_violin: Violin plot
* geom\_vline: Reference lines: horizontal, vertical, and diagonal
Examples
--------
Let’s get more of a feel for things by seeing some examples that demonstrate some geoms and aesthetics.
To begin, after setting the base aesthetic, we’ll set some explicit values for the geom.
```
library(ggplot2)
library(dplyr)   # for filter() and sample_frac() used in these examples
data("diamonds")
data('economics')
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(size = .5, color = 'peru')
```
Next we use two different geoms, and one is even using a different data source. Note that geoms have arguments both common and specific to them. In the following, `label` is used for geom\_text, but it would be ignored by geom\_line.
```
ggplot(aes(x = date, y = unemploy), data = economics) +
geom_line() +
geom_text(
aes(label = unemploy),
vjust = -.5,
data = filter(economics, date == '2009-10-01')
)
```
In the following, one setting, alpha (transparency), is not mapped to the data, while size and color are[45](#fn45).
```
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(size = carat, color = clarity), alpha = .05)
```
There are some other options to play with as well.
```
ggplot(aes(x = carat, y = price), data = diamonds %>% sample_frac(.01)) +
geom_point(aes(size = carat, color = clarity), key_glyph = "vpath")
```
Stats
-----
There are many statistical functions built in, and it is a key strength of ggplot that you don’t have to do a lot of processing for very common plots.
Here are some quantile regression lines:
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
geom_quantile()
```
geom\_smooth provides loess (or additive model) smooths. We can do some fine\-tuning and use model\-based approaches for visualization; the following fits a GAM with an adaptive smooth.
```
data(mcycle, package = 'MASS')
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_point() +
geom_smooth(formula = y ~ s(x, bs = 'ad'), method = 'gam')
```
Bootstrapped confidence intervals:
```
ggplot(mtcars, aes(cyl, mpg)) +
geom_point() +
stat_summary(
fun.data = "mean_cl_boot",
colour = "orange",
alpha = .75,
size = 1
)
```
The take\-home message here is to always let ggplot do the work for you if at all possible. However, I will say that I find it easier to create the summary data I want to visualize with tidyverse tools, rather than use stat\_summary, and you may have a similar experience.
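As a rough sketch of that alternative workflow, the interval plot above could instead be built from a pre\-computed summary. Note that the ±2 standard error bounds here are just a stand\-in for the bootstrapped interval, not an exact replacement.
```
library(dplyr)
library(ggplot2)

mtcars %>%
  group_by(cyl) %>%
  summarise(avg = mean(mpg), se = sd(mpg) / sqrt(n())) %>%
  ggplot(aes(x = cyl, y = avg)) +
  geom_pointrange(
    aes(ymin = avg - 2 * se, ymax = avg + 2 * se),
    color = 'orange'
  )
```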
Scales
------
Often there are many things we want to change about the plot, for example, the size and values of axis labels, the range of sizes for points to take, the specific colors we want to use, and so forth. Be aware that there are a great many options here, and you will regularly want to use them.
A very common thing you’ll do is change the labels for the axes. You definitely don’t have to go and change the variable name itself to do this, just use the labs function. There are also functions for individual parts, e.g. xlab, ylab and ggtitle.
```
ggplot(aes(x = times, y = accel), data = mcycle) +
geom_smooth(se = FALSE) +
labs(
x = 'milliseconds after impact',
y = 'head acceleration',
title = 'Motorcycle Accident'
)
```
A frequent operation is changing the look of the x and y axes, e.g. their limits and tick marks. Like labs, there is a general lims function as well as functions for just the specific parts. In addition, we may want to get really detailed using scale\_x\_\* or scale\_y\_\*.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
ylim(c(0, 60))
```
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_y_continuous(
limits = c(0, 60),
breaks = seq(0, 60, by = 12),
minor_breaks = seq(6, 60, by = 6)
)
```
Another common option is to change the size of points in some way. While we assign the aesthetic as before, it comes with defaults that might not work for a given situation. Play around with the range values.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl)) +
geom_point() +
scale_size(range = c(1, 3))
```
We will talk about color issues later, but for now, you may want to apply something besides the default options. The following shows a built\-in color scale for a color aesthetic that is treated as continuous, and one that is discrete, for which we supply our own colors (these particular values come from plotly’s default color scheme).
```
ggplot(mpg, aes(x = displ, y = hwy, color = cyl)) +
geom_point() +
scale_color_gradient2()
```
```
ggplot(mpg, aes(x = displ, y = hwy, color = factor(cyl))) +
geom_point() +
scale_color_manual(values = c("#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"))
```
We can even change the scale of the data itself.
```
ggplot(mpg, aes(x = displ, y = hwy)) +
geom_point() +
scale_x_log10()
```
In short, scale alterations are really useful for getting just the plot you want, and there is a lot of flexibility for you to work with. There are a lot of scales too, so know what you have available; a sketch combining several of them follows the list below.
* scale\_alpha, scale\_alpha\_continuous, scale\_alpha\_date, scale\_alpha\_datetime, scale\_alpha\_discrete, scale\_alpha\_identity, scale\_alpha\_manual, scale\_alpha\_ordinal: Alpha transparency scales
* scale\_color\_brewer, scale\_color\_distiller: Sequential, diverging and qualitative colour scales from colorbrewer.org
* scale\_color\_continuous, scale\_color\_discrete, scale\_color\_gradient, scale\_color\_gradient2, scale\_color\_gradientn, scale\_color\_grey, scale\_color\_hue, scale\_color\_identity, scale\_color\_manual, scale\_color\_viridis\_c, scale\_color\_viridis\_d, scale\_continuous\_identity Various color scales
* scale\_discrete\_identity, scale\_discrete\_manual: Discrete scales
* scale\_fill\_brewer, scale\_fill\_continuous, scale\_fill\_date, scale\_fill\_datetime, scale\_fill\_discrete, scale\_fill\_distiller, scale\_fill\_gradient, scale\_fill\_gradient2, scale\_fill\_gradientn, scale\_fill\_grey, scale\_fill\_hue, scale\_fill\_identity, scale\_fill\_manual, scale\_fill\_ordinal, scale\_fill\_viridis\_c, scale\_fill\_viridis\_d: Scales for geoms that can be filled with color
* scale\_linetype, scale\_linetype\_continuous, scale\_linetype\_discrete, scale\_linetype\_identity, scale\_linetype\_manual: Scales for line patterns
* scale\_shape, scale\_shape\_continuous, scale\_shape\_discrete, scale\_shape\_identity, scale\_shape\_manual, scale\_shape\_ordinal: Scales for shapes, aka glyphs
* scale\_size, scale\_size\_area, scale\_size\_continuous, scale\_size\_date, scale\_size\_datetime, scale\_size\_discrete, scale\_size\_identity, scale\_size\_manual, scale\_size\_ordinal: Scales for area or radius
* scale\_x\_continuous, scale\_x\_date, scale\_x\_datetime, scale\_x\_discrete, scale\_x\_log10, scale\_x\_reverse, scale\_x\_sqrt, scale\_y\_continuous, scale\_y\_date, scale\_y\_datetime, scale\_y\_discrete, scale\_y\_log10, scale\_y\_reverse, scale\_y\_sqrt: Position scales for continuous data (x \& y)
* scale\_x\_time, scale\_y\_time: Position scales for date/time data
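As promised, here is a sketch of how several scale adjustments combine in a single plot; the specific choices are just illustrative.
```
ggplot(mpg, aes(x = displ, y = hwy, size = cyl, color = cty)) +
  geom_point(alpha = .6) +
  scale_size(range = c(1, 4)) +          # control the range of point sizes
  scale_color_viridis_c() +              # a perceptually uniform continuous palette
  scale_x_continuous(breaks = 2:7) +     # explicit tick marks
  labs(
    x     = 'Displacement (liters)',
    y     = 'Highway MPG',
    color = 'City MPG',
    size  = 'Cylinders'
  )
```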
Facets
------
Facets allow for paneled display, a very common operation. In general, we often want comparison plots. The facet\_grid function will produce a grid, and often this is all that’s needed. However, facet\_wrap is more flexible, while possibly taking a bit of extra effort to get things just the way you want. Both use a formula approach to specify the grouping.
#### facet\_grid
Facet by cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(~ cyl)
```
Facet by vs and cylinder.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_grid(vs ~ cyl, labeller = label_both)
```
#### facet\_wrap
Specify the number of columns or rows with facet\_wrap.
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
facet_wrap(vs ~ cyl, labeller = label_both, ncol=2)
```
Multiple plots
--------------
Often we want distinct visualizations to come together in one plot. There are several packages that can help you here: gridExtra, cowplot, and more recently patchwork[46](#fn46). The latter especially makes things easy.
```
library(patchwork)
g1 = ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
g2 = ggplot(mtcars, aes(wt)) +
geom_density()
g3 = ggplot(mtcars, aes(mpg)) +
geom_density()
g1 / # initial plot, place next part underneath
(g2 | g3) # groups g2 and g3 side by side
```
Not that you want this, but just to demonstrate the flexibility.
```
p1 = ggplot(mtcars) + geom_point(aes(mpg, disp))
p2 = ggplot(mtcars) + geom_boxplot(aes(gear, disp, group = gear))
p3 = ggplot(mtcars) + geom_smooth(aes(disp, qsec))
p4 = ggplot(mtcars) + geom_bar(aes(carb))
p5 = ggplot(mtcars) + geom_violin(aes(cyl, mpg, group = cyl))
p1 +
p2 +
(p3 / p4) * theme_void() +
p5 +
plot_layout(widths = c(2, 1))
```
You’ll typically want to use facets to show subsets of the same data, and tools like patchwork to show different kinds of plots together.
Fine control
------------
ggplot2 makes it easy to get good looking graphs quickly. However, the amount of fine control available is extensive. The following plot is hideous (aside from the background, which is totally rad), but illustrates the point.
```
# rasterGrob comes from the grid package; lambosun is a raster image
# (the Lamborghini background) loaded earlier in the original document
library(grid)

ggplot(aes(x = carat, y = price), data = diamonds) +
annotation_custom(
rasterGrob(
lambosun,
width = unit(1, "npc"),
height = unit(1, "npc"),
interpolate = FALSE
),-Inf,
Inf,
-Inf,
Inf
) +
geom_point(aes(color = clarity), alpha = .5) +
scale_y_log10(breaks = c(1000, 5000, 10000)) +
xlim(0, 10) +
scale_color_brewer(type = 'div') +
facet_wrap( ~ cut, ncol = 3) +
theme_minimal() +
theme(
axis.ticks.x = element_line(color = 'darkred'),
axis.text.x = element_text(angle = -45),
axis.text.y = element_text(size = 20),
strip.text = element_text(color = 'forestgreen'),
strip.background = element_blank(),
panel.grid.minor = element_line(color = 'lightblue'),
legend.key = element_rect(linetype = 4),
legend.position = 'bottom'
)
```
Themes
------
In the last example you saw two uses of a theme\- a built\-in version that comes with ggplot (theme\_minimal), and specific customization (theme(…)). The built\-in themes provide ready\-made approaches that might already be good enough for a finished product. For the theme function, each argument, and there are many, takes on a specific value or an element function:
* element\_rect
* element\_line
* element\_text
* element\_blank
Each of those element functions has arguments specific to it. For example, for element\_text you can specify the font size, while for element\_line you could specify the line type.
Note that the base theme of ggplot, and I would say every plotting package, is probably going to need manipulation before a plot is ready for presentation. For example, the ggplot theme doesn’t work well for web presentation, and is even worse for print. You will almost invariably need to tweak it. I suggest using and saving your own custom theme for easy application for any visualization package you use frequently.
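As a minimal sketch of that suggestion, you might build a small theme on top of one of the built\-ins and reuse it; the specific settings here are just illustrative.
```
library(ggplot2)

# a reusable custom theme built on top of theme_minimal
my_theme = theme_minimal() +
  theme(
    plot.title       = element_text(size = 14, face = 'bold'),
    panel.grid.minor = element_blank(),
    legend.position  = 'bottom'
  )

ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  labs(title = 'Displacement vs. Highway MPG') +
  my_theme

# or apply it to every subsequent plot
theme_set(my_theme)
```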
Extensions
----------
ggplot2 now has its own extension system, and there is even a [website](http://www.ggplot2-exts.org/) to track the extensions. Examples include:
* additional themes
* maps
* interactivity
* animations
* marginal plots
* network graphs
* time series
* aligning multiple ggplot visualizations, possibly of different types
Here’s an example with gganimate.
```
library(gganimate)
load('data/gapminder.RData')
gap_plot = gapminder_2019 %>%
filter(giniPercap != 40)
gap_plot_filter = gap_plot %>%
filter(country %in% c('United States', 'Mexico', 'Canada'))
initial_plot = ggplot(gap_plot, aes(x = year, y = giniPercap, group = country)) +
geom_line(alpha = .05) +
geom_path(
aes(color = country),
lwd = 2,
arrow = arrow(
length = unit(0.25, "cm")
),
alpha = .5,
data = gap_plot_filter,
show.legend = FALSE
) +
geom_text(
aes(color = country, label = country),
nudge_x = 5,
nudge_y = 2,
size = 2,
data = gap_plot_filter,
show.legend = FALSE
) +
theme_clean() +   # theme_clean comes from the visibly package (see the exercises)
transition_reveal(year)
animate(initial_plot, end_pause = 50, nframes = 150, rewind = TRUE)
```
As one can see, ggplot2 is only the beginning. You’ll have a lot of tools at your disposal. Furthermore, many modeling and other packages will produce ggplot graphics to which you can add your own layers and tweak like you would any other ggplot.
ggplot2 Summary
---------------
ggplot2 is an easy to use, but powerful visualization tool. It allows one to think in many dimensions for any graph, and extends well beyond the basics. Use it to easily create more interesting visualizations.
ggplot2 Exercises
-----------------
### Exercise 0
Load the ggplot2 package if you haven’t already.
### Exercise 1
Create two plots, one a scatterplot (e.g. with geom\_point) and one with lines (e.g. geom\_line), with a data set of your choosing (all of the following are base R or available after loading ggplot2). Some suggestions:
* faithful: Waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
* msleep: mammals sleep dataset with sleep times and weights etc.
* diamonds: used in the slides
* economics: US economic time series.
* txhousing: Housing sales in TX.
* midwest: Midwest demographics.
* mpg: Fuel economy data from 1999 and 2008 for 38 popular models of car
Recall the basic form for ggplot.
```
ggplot(data = *, aes(x = *, y = *, other)) +
geom_*() +
otherLayers, theme etc.
```
Themes to play with:
* theme\_bw
* theme\_classic
* theme\_dark
* theme\_gray
* theme\_light
* theme\_linedraw
* theme\_minimal
* theme\_clean (requires the visibly package and an appreciation of the Lamborghini background from the previous visualization)
### Exercise 2
Play around and change the arguments to the following. You’ll need to install the maps package.
* For example, do points for all county midpoints. For that you’d need to change the x and y for the point geom to an aesthetic based on the longitude and latitude, as well as add its data argument to use the seats data frame.
* Make the color of the points or text based on `subregion`. This will require adding the fill argument to the polygon geom and removing the NA setting. In addition, add the argument show.legend\=F (outside the aesthetic), or you’ll have a problematic legend (recall what we said before about too many colors!). Try making color based on subregion too.
* See if you can use element\_blank on a theme argument to remove the axis information. See ?theme for ideas.
```
library(maps)
mi = map_data("county", "michigan")
seats = mi %>%
group_by(subregion) %>%
summarise_at(vars(lat, long), function(x) median(range(x)))
# inspect the data
# head(mi)
# head(seats)
ggplot(mi, aes(long, lat)) +
geom_polygon(aes(group = subregion), fill = NA, colour = "grey60") +
geom_text(aes(label = subregion), data = seats, size = 1, angle = 45) +
geom_point(x=-83.748333, y=42.281389, color='#1e90ff', size=3) +
theme_minimal() +
theme(panel.grid=element_blank())
```
Python Plotnine Notebook
------------------------
The R community really lucked out with ggplot, and the basic philosophy behind it is missing from practically every other static plotting package or tool. Python’s version of base R plotting is matplotlib, which continues to serve people well. But like base R plots, it can take a lot of work to get anything remotely visually appealing. Seaborn is another option, but it still just isn’t in the same league.
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Interactive Visualization
=========================
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
These packages can be roughly split into general ones that try to provide a plotting system (similar to ggplot2) versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
+ used also in Python, Matlab, Julia
+ can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr\-style data management functions. *Any* R function can potentially be piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization, and new variables can easily be created and manipulated just for a plot. Furthermore, data manipulation is not separated from visualization.
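A small sketch of that idea with plotly (covered shortly): a variable is created on the fly and piped straight into the plot, with no intermediate objects.
```
library(dplyr)
library(plotly)

mtcars %>%
  mutate(hp_per_wt = hp / wt) %>%      # a variable created just for this plot
  plot_ly(x = ~ hp_per_wt, y = ~ mpg) %>%
  add_markers()
```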
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple of demonstrations of plotly. To give some background, you can think of plotly as similar to RStudio, in that it has both enterprise (i.e. paid) aspects and open source aspects. Just like with RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note a layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The formula notation, as in `x = ~ myvar`, is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python.
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text, and combinations of them. Traces, added via the `add_*` functions, work similarly to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a result similar to the earlier plotly example, so let’s do so and then convert it.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly(gp)
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
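That said, if you do need to nudge the converted object, plotly’s layout function can be piped onto the result; a minimal sketch, reusing the `gp` object from above:
```
ggplotly(gp) %>%
  layout(legend = list(orientation = 'h', x = 0, y = -0.2))  # move the legend below the plot
```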
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network and similar visualizations, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates a small random graph and animates its node positions, sizes, and colors via a button.
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
### Plotly
I mention plotly’s capabilities here again, as it may be useful to stick to one tool that you can learn well, and one that also allows you to bounce to Python.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual; it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connote values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless you want a visual *instead* of the value and can show all possible values. I would not recommend the angled text option at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `Run App` in RStudio or use the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
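For reference, deployment to shinyapps.io itself goes through the rsconnect package; a sketch, where the account details and the `app` directory are placeholders for your own:
```
# install.packages('rsconnect')
library(rsconnect)

# one-time setup; the token and secret come from your shinyapps.io dashboard
# setAccountInfo(name = 'myaccount', token = 'TOKEN', secret = 'SECRET')

deployApp('app')   # path to the directory containing app.R
```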
### Dash
Dash is an approach to interactivity similar to Shiny, brought to you by the plotly gang. The nice thing about it is cross\-language support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a python example. Save as app.py then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, they must serve a purpose. Too often, interactivity can simply serve as distraction, and can actually detract from the data story. Make sure to use them when they can enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize create average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent whether the average number of votes and whether it was drama or not respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into two groups: more general ones that try to provide a full plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
+ used also in Python, Matlab, Julia
+ can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can potentially be piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization, and the data manipulation is not separated from the visualization itself.
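As a minimal sketch of the idea (assuming dplyr and plotly are loaded; the converted weight variable exists only for the plot):
```
library(dplyr)
library(plotly)

# create a variable purely for the plot and pipe straight into it;
# no intermediate object is needed
mtcars %>%
  mutate(wt_lbs = wt * 1000) %>%   # wt is recorded in 1000s of lbs
  plot_ly(x = ~ wt_lbs, y = ~ mpg) %>%
  add_markers()
```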
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
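As a small example of what that buys you, any htmlwidget-based plot can be saved as a standalone HTML file to share (a sketch; the file name is arbitrary):
```
library(plotly)
library(htmlwidgets)

p = plot_ly(mtcars, x = ~ wt, y = ~ mpg) %>%
  add_markers()

# widgets are just HTML + JavaScript under the hood, so they can be saved
# as a self-contained file and opened in any browser
saveWidget(p, 'mpg_by_weight.html', selfcontained = TRUE)
```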
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note the layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The `~`, as in `x = ~ percbelowpoverty`, is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python.
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text, and combinations of these. Traces, created via the `add_*` functions, work similarly to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
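A simpler sketch of mixing modes within a single trace, using the economics data that comes with ggplot2:
```
# one trace can combine modes, e.g. markers connected by lines
ggplot2::economics %>%
  plot_ly(x = ~ date, y = ~ uempmed) %>%
  add_trace(type = 'scatter', mode = 'lines+markers')
```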
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly(gp)
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
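If you must adjust the converted object, plotly’s layout and style functions can be applied after the fact. A sketch:
```
# post-hoc tweaks to the converted object; it's often still easier to make
# these changes in the ggplot code itself
ggplotly(gp) %>%
  layout(legend = list(orientation = 'h')) %>%
  style(opacity = .5, traces = 1)
```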
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
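hchart is generic, so many common objects ‘just work’. For example, handing it a plain numeric vector (here from the diamonds data in ggplot2) should produce an interactive histogram:
```
# hchart() has methods for many classes; a numeric vector becomes
# an interactive histogram with no further setup
hchart(ggplot2::diamonds$price)
```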
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
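Basic interactivity options, such as highlighting a node’s neighbors or adding a node selector, are a single call away. A sketch reusing the same nodes and edges:
```
# highlight a clicked node's neighborhood and add a dropdown id selector
visNetwork(nodes, edges) %>%
  visOptions(highlightNearest = TRUE, nodesIdSelection = TRUE)
```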
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates a small random graph and then animates the nodes’ positions, sizes, and colors when the button is clicked.
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
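If you don’t need the animation, a static sketch is even shorter (this assumes sigmajs’s sg_layout helper for automatic node placement):
```
# a static version: nodes, edges, and an automatic layout
sigmajs() %>%
  sg_nodes(nodes, id, label, size, color) %>%
  sg_edges(edges, id, source, target) %>%
  sg_layout()
```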
### Plotly
I mention plotly’s graph capabilities here because, again, it may be useful to stick to one tool that you can learn well, and one that also allows you to bounce over to Python.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
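Markers, circles, and polygons layer in the same way. A sketch adding a circle marker with a hover label instead of a click popup:
```
# a circle marker with a hover label rather than a popup
leaflet() %>%
  addTiles() %>%
  addCircleMarkers(
    lng = -83.738222,
    lat = 42.277030,
    radius = 8,
    label = 'University of Michigan'
  )
```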
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual, it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connote values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless you want a visual *instead* of the value and can show all possible values. I would not recommend angled tag options at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it is not particularly well suited to that task. Much of how you normally use R will not be useful in building a shiny app, so it will take some getting used to, and you will likely need to make a lot of tedious adjustments to get things just how you want them.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `Run App` in RStudio or call the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
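Deployment itself is mostly a one-liner once your account is linked. A sketch assuming the rsconnect package; the account values and app path below are placeholders, not real values.
```
library(rsconnect)

# one-time account setup; the values come from your shinyapps.io dashboard
# setAccountInfo(name = '<account>', token = '<token>', secret = '<secret>')

# deploy the directory containing app.R (or ui.R/server.R)
deployApp('path/to/your/app')
```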
### Dash
Dash is a similar approach to interactivity as Shiny brought to you by the plotly gang. The nice thing about it is crossplatform support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
library(dplyr)  # for the pipe
library(tidyr)  # for drop_na()
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a python example. Save as app.py then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, interactive elements must serve a purpose. Too often, interactivity simply serves as a distraction, and can actually detract from the data story. Make sure to use these features only when they enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize, create the average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent the average number of votes and whether it was drama or not, respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/interactive.html |
Interactive Visualization
=========================
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into two groups: more general ones that try to provide a full plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
+ used also in Python, Matlab, Julia
+ can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can potentially be piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization, and the data manipulation is not separated from the visualization itself.
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note the layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The `~`, as in `x = ~ percbelowpoverty`, is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python.
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text, and combinations of these. Traces, created via the `add_*` functions, work similarly to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly(gp)
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates a small random graph and then animates the nodes’ positions, sizes, and colors when the button is clicked.
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
### Plotly
I mention plotly’s graph capabilities here because, again, it may be useful to stick to one tool that you can learn well, and one that also allows you to bounce over to Python.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual, it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connote values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless you want a visual *instead* of the value and can show all possible values. I would not recommend angled tag options at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it is not particularly well suited to that task. Much of how you normally use R will not be useful in building a shiny app, so it will take some getting used to, and you will likely need to make a lot of tedious adjustments to get things just how you want them.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `Run App` in RStudio or call the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
### Dash
Dash is a similar approach to interactivity as Shiny brought to you by the plotly gang. The nice thing about it is crossplatform support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
library(dplyr)  # for the pipe
library(tidyr)  # for drop_na()
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a python example. Save as app.py then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, interactive elements must serve a purpose. Too often, interactivity simply serves as a distraction, and can actually detract from the data story. Make sure to use these features only when they enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize, create the average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent the average number of votes and whether it was drama or not, respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into more general packages that try to provide a plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
\- used also in Python, Matlab, Julia
\- can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can be potentially piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization. Furthermore, data manipulation not separated from visualization.
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you'll note a layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The formula-style `~`, as in `x = ~myvar`, is essentially the way we denote aesthetics[47](#fn47).
Plotly can be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following produces the same plot in Python.
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
Plotly has modes, which allow for points, lines, text, and combinations of them. Traces, added via the `add_*` functions, work similarly to geoms.
```
library(mgcv)    # for gam()
library(modelr)  # for add_predictions()
library(glue)

mtcars %>%
  mutate(
    amFactor  = factor(am, labels = c('auto', 'manual')),
    hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')   # custom hover text
  ) %>%
  add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%  # adds a 'pred' column
  arrange(am) %>%
  plot_ly() %>%
  add_markers(                      # point trace
    x = ~ wt,
    y = ~ mpg,
    color = ~ amFactor,
    opacity = .5,
    text = ~ hovertext,
    hoverinfo = 'text',
    showlegend = F
  ) %>%
  add_lines(                        # line trace for the model predictions
    x = ~ wt,
    y = ~ pred,
    color = ~ amFactor
  )
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it and turn our formerly static plots into interactive ones. For the previous model plot, it would have been easy to use geom\_smooth to get a similar result, so let's do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly(gp)  # pass the ggplot object explicitly
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
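One way to keep those hidden defaults in check is to set things explicitly via layout. A minimal sketch (the titles and legend placement here are just illustrative choices, not anything required):

```
plot_ly(ggplot2::midwest, x = ~ percbelowpoverty, y = ~ percollege) %>%
  add_markers(opacity = .5) %>%
  layout(
    title  = 'College attainment vs. poverty',
    xaxis  = list(title = 'Percent below poverty', zeroline = FALSE),
    yaxis  = list(title = 'Percent college educated'),
    legend = list(orientation = 'h')
  )
```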
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
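hchart is a generic, so it also works on ordinary data frames, with hcaes playing the role of aes. A minimal sketch (the variable choices are arbitrary):

```
mtcars %>%
  hchart('scatter', hcaes(x = wt, y = mpg, group = cyl))
```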
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar displays, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
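A couple of optional settings are usually worth adding, e.g. highlighting a node's neighbors and providing a dropdown to select a node by id. A hedged sketch building on the objects above:

```
visNetwork(nodes, edges, height = 300, width = 800) %>%
  visOptions(
    highlightNearest = TRUE,   # emphasize neighbors of the selected node
    nodesIdSelection = TRUE    # dropdown to pick a node by id
  ) %>%
  visLegend()
```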
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates a random graph of 30 nodes, then animates their position, size, and color when the 'animate' button is clicked.
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
### Plotly
I mention plotly's network capabilities here because it may be useful to stick with one tool that you can learn well, and, again, it allows you to bounce over to Python as well.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
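Leaflet scales easily beyond a single popup. Here is a minimal sketch using the built-in quakes data, with point size mapped to magnitude (the styling choices are arbitrary):

```
leaflet(quakes) %>%
  addTiles() %>%
  addCircleMarkers(
    lng    = ~ long,
    lat    = ~ lat,
    radius = ~ mag,     # scale point size by earthquake magnitude
    stroke = FALSE,
    fillOpacity = .3
  )
```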
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual; it's just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connote values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars unless you want a visual *instead* of the value and can show all possible values. I would not recommend the angled text option at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
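In that spirit, a more restrained sketch that sticks to subtle emphasis only (the cutoff of 5 is an arbitrary illustration):

```
iris %>%
  datatable(rownames = FALSE) %>%
  formatStyle('Sepal.Length',
              fontWeight = styleInterval(5, c('normal', 'bold')))
```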
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `run app` in RStudio or use the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they'll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren't going to do. Typically you'll want someone to use the app itself, not run code. In that case you'll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month, and even minor usage could easily exceed the free limits. For personal use it's plenty though.
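For reference, deploying to shinyapps.io is itself only a couple of lines via the rsconnect package; the account name, token, secret, and app directory below are placeholders you'd replace with values from your own shinyapps.io dashboard.

```
library(rsconnect)

# one-time account setup (placeholder values from your shinyapps.io dashboard)
setAccountInfo(name = 'myaccount', token = 'TOKEN', secret = 'SECRET')

# deploy the app found in the given directory (containing app.R, or ui.R/server.R)
deployApp(appDir = 'path/to/my_app')
```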
### Dash
Dash is an approach to interactivity similar to Shiny, brought to you by the plotly folks. The nice thing about it is cross\-platform support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a Python example. Save it as app.py, then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple of visualization packages can go a very long way. It's now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows even more dimensions to be brought to a graphic, and it can be more fun too!
However, interactive elements must serve a purpose. Too often, interactivity simply serves as a distraction, and can actually detract from the data story. Make sure to use it where it can enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies package for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize, create average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent the average number of votes and whether it was drama or not, respectively. Use add\_markers. Note that Drama will be treated as numeric since it's a 0\-1 indicator. This won't affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/interactive.html |
Interactive Visualization
=========================
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into more general packages that try to provide a plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
\- used also in Python, Matlab, Julia
\- can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can be potentially piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization. Furthermore, data manipulation not separated from visualization.
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note the layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The `=~` is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text and combinations. Traces, `add_*`, work similar to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
animate
### Plotly
I mention plotly capabilities here as again, it may be useful to stick to one tool that you can learn well, and again, could allow you to bounce to python as well.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual, it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing, can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connotes values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless the you want a visual *instead* of the value and can show all possible values. I would not recommend angled tag options at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
[
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components, a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `run app` or use the function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
### Dash
Dash is a similar approach to interactivity as Shiny brought to you by the plotly gang. The nice thing about it is crossplatform support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a python example. Save as app.py then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, they must serve a purpose. Too often, interactivity can simply serve as distraction, and can actually detract from the data story. Make sure to use them when they can enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize create average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent whether the average number of votes and whether it was drama or not respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into more general packages that try to provide a plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
\- used also in Python, Matlab, Julia
\- can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can be potentially piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization. Furthermore, data manipulation not separated from visualization.
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note the layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The `=~` is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text and combinations. Traces, `add_*`, work similar to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text and combinations. Traces, `add_*`, work similar to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
### Plotly
I mention plotly's capabilities here because, again, it may be useful to stick with one tool that you can learn well, and it also allows you to bounce over to Python.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
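Markers and alternative basemaps follow the same piped pattern. A minimal sketch using the same coordinates; the provider tile choice here is just an example.
```
leaflet() %>%
  addProviderTiles(providers$CartoDB.Positron) %>%   # a different basemap
  setView(lng = -83.738222, lat = 42.277030, zoom = 14) %>%
  addCircleMarkers(
    lng = -83.738222,
    lat = 42.277030,
    radius = 8,
    popup = hovertext
  )
```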
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
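Even without any styling, a couple of arguments add quite a bit of functionality. A minimal sketch (not from the original text):
```
datatable(
  mtcars,
  filter  = 'top',                 # per-column filters
  options = list(pageLength = 5)   # rows shown per page
)
```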
The other thing to be aware of is that tables *can* be visual; it's just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connote values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars unless you want a visual *instead* of the value and can show all possible values. I would not recommend the angled tag option at all, as it is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate ui.R and server.R files), you can then simply click `Run App` in RStudio or use the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
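For reference, the two\-file layout for the same app might look something like the following sketch; 'myapp' is just a placeholder for a directory containing the two files.
```
# ui.R
ui <- bootstrapPage(
  numericInput('n', 'Number of obs', 10),
  plotOutput('plot')
)

# server.R
server <- function(input, output) {
  output$plot <- renderPlot({
    ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
  })
}

# then, from the directory containing 'myapp'
shiny::runApp('myapp')
```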
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
### Dash
Dash is an approach to interactivity similar to Shiny, brought to you by the plotly gang. The nice thing about it is cross\-platform support for both R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
library(tidyverse)   # assumed loaded here for %>% and drop_na() used below
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a Python example. Save it as app.py, then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, interactive elements must serve a purpose. Too often, interactivity simply serves as a distraction, and can actually detract from the data story. Make sure to use it when it can enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies package for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize, create the average rating again, but also a variable representing the average number of votes. In your plotly call, use the size and color arguments to represent the average number of votes and whether it was drama or not, respectively. Use add\_markers. Note that Drama will be treated as numeric since it's a 0\-1 indicator. This won't affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
animate
### Plotly
I mention plotly capabilities here as again, it may be useful to stick to one tool that you can learn well, and again, could allow you to bounce to python as well.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual, it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing, can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connotes values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless the you want a visual *instead* of the value and can show all possible values. I would not recommend angled tag options at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
[
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components, a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `run app` or use the function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it also. However, this is great mostly just for teaching someone how to do shiny, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). However, you will notably be limited in the amount of computing resources that can be used to run the apps in a given month. Even minor usage of those could easily overtake the free settings. For personal use it’s plenty though.
### Dash
Dash is a similar approach to interactivity as Shiny brought to you by the plotly gang. The nice thing about it is crossplatform support for R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a python example. Save as app.py then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, they must serve a purpose. Too often, interactivity can simply serve as distraction, and can actually detract from the data story. Make sure to use them when they can enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize create average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent whether the average number of votes and whether it was drama or not respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Packages
--------
As mentioned, ggplot2 is the most widely used package for visualization in R. However, it is not interactive by default. Many packages use htmlwidgets, d3 (JavaScript library), and other tools to provide interactive graphics. What’s great is that while you may have to learn new packages, you don’t necessarily have to change your approach or thinking about a plot, or learn some other language.
Many of these packages can be lumped into more general packages that try to provide a plotting system (similar to ggplot2), versus those that just aim to do a specific type of plot well. Here are some to give a sense of this.
General (click to visit the associated website):
* [plotly](https://plot.ly/r/)
\- used also in Python, Matlab, Julia
\- can convert ggplot2 images to interactive ones (with varying degrees of success)
* [highcharter](http://jkunst.com/highcharter/)
+ also very general wrapper for highcharts.js and works with some R packages out of the box
* [rbokeh](http://hafen.github.io/rbokeh/)
+ like plotly, it also has cross language support
Specific functionality:
* [DT](https://rstudio.github.io/DT/)
+ interactive data tables
* [leaflet](https://rstudio.github.io/leaflet/)
+ maps with OpenStreetMap
* [visNetwork](http://datastorm-open.github.io/visNetwork/)
+ Network visualization
In what follows we’ll see some of these in action. Note that unlike the previous chapter, the goal here is not to dive deeply, but just to get an idea of what’s available.
Piping for Visualization
------------------------
One of the advantages to piping is that it’s not limited to dplyr style data management functions. *Any* R function can be potentially piped to, and several examples have already been shown. Many newer visualization packages take advantage of piping, and this facilitates data exploration. We don’t have to create objects just to do a visualization. New variables can be easily created and subsequently manipulated just for visualization. Furthermore, data manipulation not separated from visualization.
htmlwidgets
-----------
The htmlwidgets package makes it easy to create visualizations based on JavaScript libraries. If you’re not familiar with JavaScript, you actually are very familiar with its products, as it’s basically the language of the web, visual or otherwise. The R packages using it typically are pipe\-oriented and produce interactive plots. In addition, you can use the htmlwidgets package to create your own functions that use a particular JavaScript library (but someone probably already has, so look first).
Plotly
------
We’ll begin our foray into the interactive world with a couple demonstrations of plotly. To give some background, you can think of plotly similar to RStudio, in that it has both enterprise (i.e. pay for) aspects and open source aspects. Just like RStudio, you have full access to what it has to offer via the open source R package. You may see old help suggestions referring to needing an account, but this is no longer necessary.
When using plotly, you’ll note the layering approach similar to what we had with ggplot2. Piping is used before plotting to do some data manipulation, after which we seamlessly move to the plot itself. The `=~` is essentially the way we denote aesthetics[47](#fn47).
Plotly is able to be used in both R and Python.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text and combinations. Traces, `add_*`, work similar to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
#### R
```
library(plotly)
midwest %>%
filter(inmetro == T) %>%
plot_ly(x = ~ percbelowpoverty, y = ~ percollege) %>%
add_markers()
```
#### plotly with Python
The following does the same plot in Python
```
import pandas as pd
import plotly.express as px
midwest = pd.DataFrame(r.midwest) # from previous chunk using reticulate
plt = px.scatter(midwest, x = 'percbelowpoverty', y = 'percollege')
plt.show() # opens in browser
```
### Modes
plotly has modes, which allow for points, lines, text and combinations. Traces, `add_*`, work similar to geoms.
```
library(mgcv)
library(modelr)
library(glue)
mtcars %>%
mutate(
amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = glue('weight: {wt} <br> mpg: {mpg} <br> {amFactor}')
) %>%
add_predictions(gam(mpg ~ s(wt, am, bs = 'fs'), data = mtcars)) %>%
arrange(am) %>%
plot_ly() %>%
add_markers(
x = ~ wt,
y = ~ mpg,
color = ~ amFactor,
opacity = .5,
text = ~ hovertext,
hoverinfo = 'text',
showlegend = F
) %>%
add_lines(
x = ~ wt,
y = ~ pred,
color = ~ amFactor
)
```
While you can use plotly as a one\-liner[48](#fn48), this would only be good for quick peeks while doing data exploration. It would generally be far too limiting otherwise.
```
plot_ly(ggplot2::midwest, x = ~percollege, color = ~state, type = "box")
```
And here is a Python example or two using plotly express.
```
plt = px.box(midwest, x = 'state', y = 'percollege', color = 'state', notched=True)
plt.show() # opens in browser
tips = px.data.tips() # built-in dataset
px.violin(
tips,
y = "tip",
x = "smoker",
color = "sex",
box = True,
points = "all",
hover_data = tips.columns
).show()
```
### ggplotly
One of the strengths of plotly is that we can feed a ggplot object to it, and turn our formerly static plots into interactive ones. It would have been easy to use geom\_smooth to get a similar result, so let’s do so.
```
gp = mtcars %>%
mutate(amFactor = factor(am, labels = c('auto', 'manual')),
hovertext = paste(wt, mpg, amFactor)) %>%
arrange(wt) %>%
ggplot(aes(x = wt, y = mpg, color = amFactor)) +
geom_smooth(se = F) +
geom_point(aes(color = amFactor))
ggplotly()
```
Note that this is not a one\-to\-one transformation. The plotly image will have different line widths and point sizes. It will usually be easier to change it within the ggplot process than tweaking the ggplotly object.
Be prepared to spend time getting used to plotly. It has (in my opinion) poor documentation, is not nearly as flexible as ggplot2, has hidden (and arbitrary) defaults that can creep into a plot based on aspects of the data (rather than your settings), and some modes do not play nicely with others. That said, it works great for a lot of things, and I use it regularly.
Highcharter
-----------
Highcharter is also fairly useful for a wide variety of plots, and is based on the highcharts.js library. If you have data suited to one of its functions, getting a great interactive plot can be ridiculously easy.
In what follows we use quantmod to create an xts (time series) object of Google’s stock price, including opening and closing values. The highcharter object has a ready\-made plot for such data[49](#fn49).
```
library(highcharter)
library(quantmod)
google_price = getSymbols("GOOG", auto.assign = FALSE)
hchart(google_price)
```
Graph networks
--------------
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
animate
### Plotly
I mention plotly capabilities here as again, it may be useful to stick to one tool that you can learn well, and again, could allow you to bounce to python as well.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
### visNetwork
The visNetwork package is specific to network visualizations and similar, and is based on the vis.js library. Networks require nodes and edges to connect them. These take on different aspects, and so are created in separate data frames.
```
set.seed(1352)
nodes = data.frame(
id = 0:5,
label = c('Bobby', 'Janie', 'Timmie', 'Mary', 'Johnny', 'Billy'),
group = c('friend', 'frenemy', 'frenemy', rep('friend', 3)),
value = sample(10:50, 6)
)
edges = data.frame(
from = c(0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5),
to = sample(0:5, 13, replace = T),
value = sample(1:10, 13, replace = T)
) %>%
filter(from != to)
library(visNetwork)
visNetwork(nodes, edges, height = 300, width = 800) %>%
visNodes(
shape = 'circle',
font = list(),
scaling = list(
min = 10,
max = 50,
label = list(enable = T)
)
) %>%
visLegend()
```
### sigmajs
The sigmajs package allows one to use the corresponding JS library to create some clean and nice visualizations for graphs. The following creates
```
library(sigmajs)
nodes <- sg_make_nodes(30)
edges <- sg_make_edges(nodes)
# add transitions
n <- nrow(nodes)
nodes$to_x <- runif(n, 5, 10)
nodes$to_y <- runif(n, 5, 10)
nodes$to_size <- runif(n, 5, 10)
nodes$to_color <- sample(c("#ff5500", "#00aaff"), n, replace = TRUE)
sigmajs() %>%
sg_nodes(nodes, id, label, size, color, to_x, to_y, to_size, to_color) %>%
sg_edges(edges, id, source, target) %>%
sg_animate(
mapping = list(
x = "to_x",
y = "to_y",
size = "to_size",
color = "to_color"
),
delay = 0
) %>%
sg_settings(animationsTime = 3500) %>%
sg_button("animate", # button label
"animate", # event name
class = "btn btn-warning")
```
animate
### Plotly
I mention plotly capabilities here as again, it may be useful to stick to one tool that you can learn well, and again, could allow you to bounce to python as well.
```
import plotly.graph_objects as go
import networkx as nx
G = nx.random_geometric_graph(50, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x,
y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
colorscale='Blackbody',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plot.ly/ipython-notebooks/network-graphs/'> https://plot.ly/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
leaflet
-------
The leaflet package from RStudio is good for quick interactive maps, and it’s quite flexible and has some nice functionality to take your maps further. Unfortunately, it actually doesn’t always play well with many markdown formats.
```
hovertext <- paste(sep = "<br/>",
"<b><a href='http://umich.edu/'>University of Michigan</a></b>",
"Ann Arbor, MI"
)
library(leaflet)
leaflet() %>%
addTiles() %>%
addPopups(
lng = -83.738222,
lat = 42.277030,
popup = hovertext
)
```
DT
--
It might be a bit odd to think of data frames visually, but they can be interactive also. One can use the DT package for interactive data frames. This can be very useful when working in collaborative environments where one shares reports, as you can embed the data within the document itself.
```
library(DT)
ggplot2movies::movies %>%
select(1:6) %>%
filter(rating > 8, !is.na(budget), votes > 1000) %>%
datatable()
```
The other thing to be aware of is that tables *can* be visual, it’s just that many academic outlets waste this opportunity. Simple bolding, italics, and even sizing, can make results pop more easily for the audience. The DT package allows for coloring and even simple things like bars that connotes values. The following gives some idea of its flexibility.
```
iris %>%
# arrange(desc(Petal.Length)) %>%
datatable(rownames = F,
options = list(dom = 'firtp'),
class = 'row-border') %>%
formatStyle('Sepal.Length',
fontWeight = styleInterval(5, c('normal', 'bold'))) %>%
formatStyle('Sepal.Width',
color = styleInterval(c(3.4, 3.8), c('#7f7f7f', '#00aaff', '#ff5500')),
backgroundColor = styleInterval(3.4, c('#ebebeb', 'aliceblue'))) %>%
formatStyle(
'Petal.Length',
# color = 'transparent',
background = styleColorBar(iris$Petal.Length, '#5500ff'),
backgroundSize = '100% 90%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center'
) %>%
formatStyle(
'Species',
color = 'white',
transform = 'rotateX(45deg) rotateY(20deg) rotateZ(30deg)',
backgroundColor = styleEqual(unique(iris$Species), c('#1f65b7', '#66b71f', '#b71f66'))
)
```
I would in no way recommend using the bars, unless you want a visual *instead* of the value and can show all possible values. I would not recommend the angled text options at all, as that is more or less a prime example of chartjunk. However, subtle use of color and emphasis, as with the Sepal columns, can make tables of results that your audience will actually spend time exploring.
Shiny
-----
Shiny is a framework that can essentially allow you to build an interactive website/app. Like some of the other packages mentioned, it’s provided by [RStudio](https://shiny.rstudio.com/) developers. However, most of the more recently developed interactive visualization packages will work specifically within the shiny and rmarkdown setting.
You can make shiny apps just for your own use and run them locally. But note, you are using R, a statistical programming language, to build a webpage, and it’s not necessarily particularly well\-suited for it. Much of how you use R will not be useful in building a shiny app, and so it will definitely take some getting used to, and you will likely need to do a lot of tedious adjustments to get things just how you want.
Shiny apps have two main components: a part that specifies the user interface, and a server function that will do all the work. With those in place (either in a single ‘app.R’ file or in separate files), you can then simply click `Run App` in RStudio or use the runApp function.
This example is taken from the shiny help file, and you can actually run it as is.
```
library(shiny)
# Running a Shiny app object
app <- shinyApp(
ui = bootstrapPage(
numericInput('n', 'Number of obs', 10),
plotOutput('plot')
),
server = function(input, output) {
output$plot <- renderPlot({
ggplot2::qplot(rnorm(input$n), xlab = 'Is this normal?!')
})
}
)
runApp(app)
```
You can share your app code/directory with anyone and they’ll be able to run it as well. However, that is mostly useful for teaching someone how to build a shiny app, which most people aren’t going to do. Typically you’ll want someone to use the app itself, not run code. In that case you’ll need a web server. You can get up to 5 free ‘running’ applications at [shinyapps.io](http://shinyapps.io). Note, however, that you will be limited in the amount of computing resources that can be used to run the apps in a given month, and even modest usage could easily exceed the free tier. For personal use it’s plenty though.
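For reference, publishing to shinyapps.io is typically handled with the rsconnect package. The sketch below assumes you have already created an account; the app directory and token values are placeholders.
```
library(rsconnect)

# one-time setup: copy the token/secret shown in your shinyapps.io dashboard
# setAccountInfo(name = 'youraccount', token = 'TOKEN', secret = 'SECRET')

deployApp(appDir = 'path/to/your/app')  # uploads and deploys the app
```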
### Dash
Dash is an approach to interactivity similar to Shiny, brought to you by the plotly folks. The nice thing about it is cross\-language support for both R and Python.
#### R
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
library(tidyverse)  # provides %>% and drop_na() used below
app <- Dash$new()
df <- readr::read_csv(file = "data/gapminder_small.csv") %>%
drop_na()
continents <- unique(df$continent)
data_gdp_life <- with(df,
lapply(continents,
function(cont) {
list(
x = gdpPercap[continent == cont],
y = lifeExp[continent == cont],
opacity=0.7,
text = country[continent == cont],
mode = 'markers',
name = cont,
marker = list(size = 15,
line = list(width = 0.5, color = 'white'))
)
}
)
)
app$layout(
htmlDiv(
list(
dccGraph(
id = 'life-exp-vs-gdp',
figure = list(
data = data_gdp_life,
layout = list(
xaxis = list('type' = 'log', 'title' = 'GDP Per Capita'),
yaxis = list('title' = 'Life Expectancy'),
margin = list('l' = 40, 'b' = 40, 't' = 10, 'r' = 10),
legend = list('x' = 0, 'y' = 1),
hovermode = 'closest'
)
)
)
)
)
)
app$run_server()
```
#### Python dash example
Here is a Python example. Save it as app.py, then at the terminal run `python app.py`.
```
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
df = pd.read_csv('data/gapminder_small.csv')
app.layout = html.Div([
dcc.Graph(
id='life-exp-vs-gdp',
figure={
'data': [
dict(
x=df[df['continent'] == i]['gdpPercap'],
y=df[df['continent'] == i]['lifeExp'],
text=df[df['continent'] == i]['country'],
mode='markers',
opacity=0.7,
marker={
'size': 15,
'line': {'width': 0.5, 'color': 'white'}
},
name=i
) for i in df.continent.unique()
],
'layout': dict(
xaxis={'type': 'log', 'title': 'GDP Per Capita'},
yaxis={'title': 'Life Expectancy'},
margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
legend={'x': 0, 'y': 1},
hovermode='closest'
)
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
Interactive and Visual Data Exploration
---------------------------------------
As seen above, just a couple visualization packages can go a very long way. It’s now very easy to incorporate interactivity, so you should use it even if only for your own data exploration.
In general, interactivity allows for even more dimensions to be brought to a graphic, and can be more fun too!
However, interactive elements must serve a purpose. Too often, interactivity simply serves as a distraction, and can actually detract from the data story. Make sure to use such features only when they enhance the narrative you wish to express.
Interactive Visualization Exercises
-----------------------------------
### Exercise 0
Install and load the plotly package. Load the tidyverse package if necessary (so you can use dplyr and ggplot2), and install/load the ggplot2movies package for the IMDB data.
### Exercise 1
Using dplyr, group by year, and summarize to create a new variable that is the Average rating. Refer to the [tidyverse](tidyverse.html#tidyverse) section if you need a refresher on what’s being done here. Then create a plot with plotly for a line or scatter plot (for the latter, use the add\_markers function). It will take the following form, but you’ll need to supply the plotly arguments.
```
library(ggplot2movies)
movies %>%
group_by(year) %>%
summarise(Avg_Rating = mean(rating))
plot_ly() %>%
add_markers()
```
### Exercise 2
This time group by year *and* Drama. In the summarize, create average rating again, but also a variable representing the average number of votes. In your plotly line, use the size and color arguments to represent the average number of votes and whether it was drama or not, respectively. Use add\_markers. Note that Drama will be treated as numeric since it’s a 0\-1 indicator. This won’t affect the plot, but if you want, you might use mutate to change it to a factor with labels ‘Drama’ and ‘Other’.
### Exercise 3
Create a ggplot of your own design and then use ggplotly to make it interactive.
Python Interactive Visualization Notebook
-----------------------------------------
[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/interactive.ipynb)
If using Python though, you’re in luck! You get most of the basic functionality of ggplot2 via the plotnine module. A jupyter notebook demonstrating most of the previous is available [here](https://github.com/m-clark/data-processing-and-visualization/blob/master/code/ggplot.ipynb).
Thinking Visually
=================
Information
-----------
A starting point for data visualization regards the information you want to display, and then how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of the more compelling story. We don’t want to waste the time of the audience or be redundant, but we also want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be said with a couple words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide of a) line on a nonlinear relationship. And finally, the color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumpy on sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
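To give a concrete sense of how such a plot might be put together, here is a rough sketch (not the author’s actual code) using the starwars data from dplyr. Homeworld\-as\-shape is omitted since there are too many homeworlds for a legible shape scale, and the styling is only an approximation.
```
library(dplyr)
library(ggplot2)
library(plotly)

sw <- starwars %>%
  filter(!is.na(mass), !is.na(height), !is.na(birth_year), mass < 500)  # drop extreme outliers

p <- ggplot(sw, aes(x = mass, y = height)) +
  geom_smooth(se = FALSE, color = 'gray75') +
  geom_point(aes(color = sex, size = birth_year, text = name), alpha = .6) +  # text is picked up by ggplotly for hover
  theme_minimal()

ggplotly(p, tooltip = c('text', 'x', 'y'))
```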
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may just simply prefer something else. For example, for me, the gray background of ggplot2 defaults is something I have to remove for every plot[53](#fn53).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions with ggplot2 (a short example follows the list):
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
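As a minimal sketch, assuming a recent version of ggplot2, where these scales are built in:
```
library(ggplot2)

# continuous data
ggplot(mpg, aes(displ, hwy, color = hwy)) +
  geom_point(size = 2) +
  scale_color_viridis_c()

# discrete data
ggplot(mpg, aes(class, fill = class)) +
  geom_bar() +
  scale_fill_viridis_d(option = 'cividis')
```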
### Scientific colors
Yet another set of palettes is available via the scico package, and these are specifically geared toward scientific presentation. These perceptually\-uniform color maps, with sequential, diverging, and circular palettes, handle data variations equally all along the color bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions with ggplot2 (a short example follows the list):
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
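A brief sketch, assuming the scico package is installed:
```
library(ggplot2)
library(scico)

ggplot(mpg, aes(displ, hwy, color = cty)) +
  geom_point(size = 2) +
  scale_color_scico(palette = 'lajolla')

scico_palette_names()  # list the available palettes
```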
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions with ggplot2 (a short example follows the list):
* scale\_color\_brewer
* scale\_fill\_brewer
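For example (the palettes themselves come from the RColorBrewer package):
```
library(ggplot2)

ggplot(mpg, aes(class, fill = class)) +
  geom_bar() +
  scale_fill_brewer(palette = 'Dark2')

# preview every brewer palette
RColorBrewer::display.brewer.all()
```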
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with something better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where their text was rendered with too little contrast to read comfortably. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
* Here is text at 2 to 1
* Here is text at 4 to 1
* Here is text at 7 to 1 (this document)
* Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns regard color as well. When considering color, one should also think about the background for plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to the area. Packages like ggplot2 and plotly will automatically do this, but some might not, so you should check.
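In ggplot2 terms, the default scale\_size maps values to point area, while scale\_radius maps them to the radius; comparing the two makes the perceptual difference easy to see.
```
library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg, size = hp)) +
  geom_point(alpha = .5)

p + scale_size(range = c(1, 10))    # scaled by area (the usual default)
p + scale_radius(range = c(1, 10))  # scaled by radius; differences look exaggerated
```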
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can be used to add additional information to a plot. In the following scatter plot, we can get a better sense of data density from the fact that the plot is darker where points overlap more.
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
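As a small illustration of both uses, here is a sketch with the diamonds data from ggplot2 (not the data shown in the plots above):
```
library(ggplot2)

# overlapping points: darker regions indicate higher data density
ggplot(diamonds, aes(carat, price)) +
  geom_point(alpha = .05)

# overlapping distributions: each group's full distribution stays visible
ggplot(diamonds, aes(price, fill = cut)) +
  geom_density(alpha = .3, color = NA)
```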
Accessibility
-------------
Among many things (apparently) rarely considered in typical academic or other visualization is accessibility. The following definition comes from the World Wide Web Consortium.
> Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.
The main message is that not everyone is able to use the web in the same manner. While you won’t be able to satisfy everyone who might come across your work, putting a little thought into your offering can go a long way, and potentially widen your audience.
We talked about this previously, but when communicating visually, one can do simple things like choosing a colorblind\-friendly palette, or using a font contrast that will make it easier on the eyes of those reading your work. There are even browser plugins to test your web content for accessibility. In addition, there are little things like adding a title to inserted images, making links more noticeable etc., all of which can help consumers of your information.
File Types
----------
It’s one thing to create a visualization, but at some point you’re likely going to want to share it. RStudio will allow for the export of any visualization created in the Plots or Viewer tab. In addition, various packages may have their own save function that allows you to specify size, type, or other aspects. Here we’ll discuss some of the options.
* png: These are relatively small in size and ubiquitous on the web. You should feel fine in this format. It does not scale however, so if you make a smaller image and someone zooms, it will become blurry.
* gif: These are the type used for all the silly animations you see on the web. Using them is fine if you want to make an animation, but know that it can go longer than a couple seconds, and there is no requirement for it to be asinine.
* jpg: Commonly used for photographs, which isn’t the case with data generated graphs. Given their relative size I don’t see much need for these.
* svg: These take a different approach to imaging and can scale. You can make a very small one and it (potentially) can still look great when zoomed in to a much larger size. Often useful for logos, but possibly in any situation.
As I don’t know what screen will see my visualizations, I generally opt for svg. It may be a bit slower/larger, but in my usage and for my audience size, this is of little concern relative to it looking proper. They also work for pdf if you’re still creating those, and there are also lighter weight versions in R, e.g. svglite. Beyond that I use png, and have no need for others.
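As a quick example of exporting, ggsave chooses the graphics device from the file extension; depending on your ggplot2 version, svg output may require the svglite package, and the file names here are just placeholders.
```
library(ggplot2)

p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()

ggsave('my_plot.svg', p, width = 6, height = 4)             # scalable vector graphic
ggsave('my_plot.png', p, width = 6, height = 4, dpi = 300)  # raster version
```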
Here is a [discussion on stackexchange](https://stackoverflow.com/questions/2336522/png-vs-gif-vs-jpeg-vs-svg-when-best-to-use) that summarizes some of the above. The initial question is old but there have been recent updates to the responses.
Note also, you can import files directly into your documents with R, markdown, HTML tags, or \\(\\LaTeX\\). See `?png` for a starting point. The following demonstrates an image insert for HTML output, with a couple options for centering and size.
`<img src="file.jpg" style="display:block; margin: 0 auto;" width=50%>`
This uses markdown to get the same result
```
![](file.jpg){width=50%}
```
Summary of Thinking Visually
----------------------------
The goal of this section was mostly just to help you realize that there are many things to consider when visualizing information and attempting to communicate the contents of data. The approach is not the same as what one would do in, say, an artistic venture, or where there is nothing specific to impart to an audience. Even some of the most common things you see published are fundamentally problematic, so you can’t even use what people traditionally do as a guide. However, there are many tools available to help you. Another thing to keep in mind is that there is no right way to do a particular visualization, and many ways to have fun with it.
A casual list of things to avoid
--------------------------------
I’m just putting things that come to mind here as I return to this document. Mostly it is personal opinion, though often based on various sources in the data visualization realm or simply my own experience.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across use of 3D in scientific communication which is fairly poor and makes the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc. and finally, prefer interactivity to 3D. Where it is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by `cut` instead of `price`. Use scale\_color\_viridis\_d or scale\_color\_scico\_d. See the help file via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across use of 3D in scientific communication which is fairly poor and makes the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc. and finally, prefer interactivity to 3D. Where it is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across use of 3D in scientific communication which is fairly poor and makes the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc. and finally, prefer interactivity to 3D. Where it is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by the `cut` instead of `price`. Use scale\_color\_viridis/scioc\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by the `cut` instead of `price`. Use scale\_color\_viridis/scioc\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/thinking_vis.html |
Thinking Visually
=================
Information
-----------
A starting point for data visualization is the information you want to display, and how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of the more compelling story. We don’t want to waste the audience’s time or be redundant, but we also want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be told in a few words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many people are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide) line on a nonlinear relationship. And finally, the color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumped by sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may simply prefer something else. For example, the gray background of the ggplot2 defaults is something I have to remove for every plot[53](#fn53).
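For what it’s worth, a minimal sketch of one way to handle that particular gripe is to set a cleaner theme once per session, rather than stripping the background from every plot individually.

```
# set a less cluttered default theme for all subsequent ggplot2 plots
library(ggplot2)

theme_set(theme_minimal())
```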
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions with ggplot2 (a brief sketch follows the list):
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
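As a quick sketch of how these might be used, with the built\-in mtcars data standing in for your own, the `_c` variants handle continuous values and the `_d` variants discrete ones.

```
library(ggplot2)

# continuous variable mapped to color: use the _c variant
ggplot(mtcars, aes(x = wt, y = mpg, color = hp)) +
  geom_point(size = 3) +
  scale_color_viridis_c()

# discrete variable mapped to color: use the _d variant (here with the 'magma' option)
ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point(size = 3) +
  scale_color_viridis_d(option = 'magma')
```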
### Scientific colors
Yet another set of palettes is available via the scico package, specifically geared toward scientific presentation. These perceptually\-uniform color maps, which include sequential, diverging, and circular palettes, handle data variations evenly along the color bar and still work in black and white print. They provide more options to complement viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions with ggplot2 (a brief sketch follows the list):
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
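A rough sketch of usage follows; the palette name is just one example, and scico\_palette\_names will list what is available.

```
library(ggplot2)
library(scico) # provides the palettes and the scale_*_scico functions

# continuous variable with a scientific palette ('lajolla' is one example)
ggplot(mtcars, aes(x = wt, y = mpg, color = disp)) +
  geom_point(size = 3) +
  scale_color_scico(palette = 'lajolla')

# list the available palettes
scico_palette_names()
```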
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions with ggplot2 (a brief sketch follows the list):
* scale\_color\_brewer
* scale\_fill\_brewer
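A small sketch using a qualitative Brewer palette for a discrete variable:

```
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg, color = factor(gear))) +
  geom_point(size = 3) +
  scale_color_brewer(palette = 'Dark2')

# RColorBrewer itself can display all of its palettes for comparison
# RColorBrewer::display.brewer.all()
```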
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with anything better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where their text was styled with so little contrast that it was hard to read. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
* Here is text at 2 to 1
* Here is text at 4 to 1
* Here is text at 7 to 1 (this document)
* Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns apply to color as well. When considering color, one should also think about the background of plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
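If you’re curious what such a checker is doing, the following is a minimal sketch of the WCAG 2 contrast ratio it is based on, which divides the relative luminance of the lighter color (plus a small constant) by that of the darker. This is only an illustration, not the visibly implementation itself.

```
# sketch of the WCAG 2 contrast ratio; an illustration, not the visibly code
contrast_ratio = function(fg, bg) {
  rel_luminance = function(col) {
    rgb = col2rgb(col)[, 1] / 255                      # sRGB values on 0-1
    lin = ifelse(rgb <= 0.03928, rgb / 12.92, ((rgb + 0.055) / 1.055)^2.4)
    sum(c(0.2126, 0.7152, 0.0722) * lin)               # relative luminance
  }
  lums = sort(c(rel_luminance(fg), rel_luminance(bg)), decreasing = TRUE)
  (lums[1] + 0.05) / (lums[2] + 0.05)                  # lighter over darker
}

contrast_ratio('#F8766D', 'gray92')   # roughly 2.2, fails AA
contrast_ratio('#440154', 'gray92')   # roughly 12.7, passes AAA
```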
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to the area. Packages like ggplot2 and plotly will automatically do this, but some might not, so you should check.
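In ggplot2 this distinction is exposed directly, as in the following minimal sketch.

```
library(ggplot2)

p = ggplot(mtcars, aes(x = wt, y = mpg, size = hp)) +
  geom_point(alpha = .5)

p + scale_size()     # maps values to point area (generally what you want)
p + scale_radius()   # maps values to the radius; larger values look exaggerated
```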
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can convey extra information in a plot. In the following scatter plot, we get a better sense of data density, because the plot is darker where points overlap more.
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
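A brief sketch of both uses, with the diamonds data that comes with ggplot2 standing in for your own:

```
library(ggplot2)

# overlapping points: transparency conveys density where points pile up
ggplot(diamonds, aes(x = carat, y = price)) +
  geom_point(alpha = .05)

# overlapping distributions: each group's full density remains visible
ggplot(diamonds, aes(x = price, fill = cut)) +
  geom_density(alpha = .3, color = NA) +
  scale_fill_viridis_d()
```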
Accessibility
-------------
Among many things (apparently) rarely considered in typical academic or other visualization is accessibility. The following definition comes from the World Wide Web Consortium.
> Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.
The main message is that not everyone is able to use the web in the same manner. While you won’t be able to satisfy everyone who might come across your work, putting a little thought into your offering can go a long way, and potentially widen your audience.
We talked about this previously, but when communicating visually, one can do simple things like choosing a colorblind\-friendly palette, or using a font contrast that will make it easier on the eyes of those reading your work. There are even browser plugins to test your web content for accessibility. In addition, there are little things like adding a title to inserted images, making links more noticeable etc., all of which can help consumers of your information.
File Types
----------
It’s one thing to create a visualization, but at some point you’re likely going to want to share it. RStudio allows for the export of any visualization created in the Plots or Viewer tab. In addition, various packages may have their own save function that allows you to specify size, type, or other aspects. Here we’ll discuss some of the options.
* png: These are relatively small in size and ubiquitous on the web. You should feel fine in this format. It does not scale however, so if you make a smaller image and someone zooms, it will become blurry.
* gif: These are the type used for all the silly animations you see on the web. Using them is fine if you want to make an animation, but know that it can go longer than a couple seconds, and there is no requirement for it to be asinine.
* jpg: Commonly used for photographs, which isn’t the case with data generated graphs. Given their relative size I don’t see much need for these.
* svg: These take a different approach to imaging and can scale. You can make a very small one and it (potentially) can still look great when zoomed in to a much larger size. Often useful for logos, but possibly in any situation.
As I don’t know what screen will display my visualizations, I generally opt for svg. It may be a bit slower/larger, but in my usage and for my audience size, this is of little concern relative to it looking proper. Svg files also work for pdf if you’re still creating those, and there are lighter\-weight svg devices in R, e.g. svglite. Beyond that I use png, and have no need for others.
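A sketch of saving a plot in a couple of these formats with ggplot2’s ggsave follows; the file names are hypothetical, and the svg output assumes the svglite package is installed.

```
library(ggplot2)

p = ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()

ggsave('myplot.png', p, width = 6, height = 4, dpi = 300)  # raster, fine for the web
ggsave('myplot.svg', p, width = 6, height = 4)             # vector, scales cleanly
```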
Here is a [discussion on stackexchange](https://stackoverflow.com/questions/2336522/png-vs-gif-vs-jpeg-vs-svg-when-best-to-use) that summarizes some of the above. The initial question is old but there have been recent updates to the responses.
Note also, you can import files directly into your documents with R, markdown, HTML tags, or \\(\\LaTeX\\). See `?png` for a starting point. The following demonstrates an image insert for HTML output, with a couple options for centering and size.
`<img src="file.jpg" style="display:block; margin: 0 auto;" width=50%>`
This uses markdown to get the same result:
```
![](file.jpg){width=50%}
```
Summary of Thinking Visually
----------------------------
The goal of this section was mostly to help you realize that there are many things to consider when visualizing information and attempting to communicate the contents of data. The approach is not the same as what one would take in, say, an artistic venture, or where there is nothing specific to impart to an audience. Even some of the most common things you see published are fundamentally problematic, so you can’t simply use what people traditionally do as a guide. However, there are many tools available to help you. Another thing to keep in mind is that there is no single right way to do a particular visualization, and many ways to have fun with it.
A casual list of things to avoid
--------------------------------
I’m just putting things that come to mind here as I return to this document. Mostly it is personal opinion, though often based on various sources in the data visualization realm or simply my own experience.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across uses of 3D in scientific communication that are fairly poor and make the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, faceting, and the like, and to prefer interactivity over 3D. Where 3D is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting (see the sketch below)
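A small sketch of the faceting alternative, splitting one plot into panels rather than reaching for a third spatial dimension:

```
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  facet_wrap(~ cyl)   # one 2d panel per group instead of a 3d plot
```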
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by `cut` instead of `price`. Use scale\_color\_viridis\_d or scale\_color\_scico\_d. See the help file via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
Information
-----------
A starting point for data visualization regards the information you want to display, and then how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of the more compelling story. We don’t want to waste the time of the audience or be redundant, but we also want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be said with a couple words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide of a) straight line on a nonlinear relationship. And finally, color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumpy on sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may just simply prefer something else. For example, for me, the gray background of ggplot2 defaults is something I have to remove for every plot[53](#fn53).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions for with ggplot2:
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
### Scientific colors
Yet another set of palettes are available via the scico package, and are specifically geared toward for scientific presentation. These perceptually\-uniform color maps sequential, divierging, and circular pallets, will handle data variations equally all along the colour bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions for with ggplot2:
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions for with ggplot2:
* scale\_color\_brewer
* scale\_fill\_brewer/span\>
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions for with ggplot2:
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
### Scientific colors
Yet another set of palettes are available via the scico package, and are specifically geared toward for scientific presentation. These perceptually\-uniform color maps sequential, divierging, and circular pallets, will handle data variations equally all along the colour bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions for with ggplot2:
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions for with ggplot2:
* scale\_color\_brewer
* scale\_fill\_brewer/span\>
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where there text looks like this. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
\-Here is text at 2 to 1
\-Here is text at 4 to 1
\-Here is text at 7 to 1 (this document)
\-Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns regard color as well. When considering color, one should also think about the background for plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to the area. Packages like ggplot2 and plotly will automatically do this, but some might not, so you should check.
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can be used to add additional information to a plot. In the following scatter plot, we can get a better sense of data density from the fact that the plot is darker where points overlap more.
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/thinking_vis.html |
Thinking Visually
=================
Information
-----------
A starting point for data visualization is the information you want to display, and how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of the more compelling story. We don’t want to waste the time of the audience or be redundant, and we also want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be said with a couple words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide of a) line on a nonlinear relationship. And finally, the color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumpy on sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may just simply prefer something else. For example, for me, the gray background of ggplot2 defaults is something I have to remove for every plot[53](#fn53).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions with ggplot2 (a brief example follows the list):
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
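A minimal sketch of the continuous version, assuming only ggplot2; the data and the `option` choice are just for illustration.

```
# continuous viridis scale mapped to a continuous variable
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg, color = hp)) +
  geom_point(size = 3) +
  scale_color_viridis_c(option = 'magma')   # 'viridis' is the default option
```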
### Scientific colors
Yet another set of palettes is available via the scico package, specifically geared toward scientific presentation. These perceptually\-uniform color maps (sequential, diverging, and circular palettes) handle data variations equally all along the color bar, and still work for black\-and\-white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions with ggplot2 (see the sketch after the list):
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
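Here is a minimal sketch for a continuous fill; it assumes the scico package is installed, and the palette name is simply one of its options.

```
# scico palette for a continuous fill
# install.packages('scico')   # if not already installed
library(ggplot2)
library(scico)

ggplot(faithfuld, aes(x = waiting, y = eruptions, fill = density)) +
  geom_raster() +
  scale_fill_scico(palette = 'lajolla')
```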
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions with ggplot2 (an example follows the list):
* scale\_color\_brewer
* scale\_fill\_brewer
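A minimal sketch for discrete data, using the Dark2 palette mentioned above.

```
# qualitative Brewer palette for a discrete grouping
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  scale_color_brewer(palette = 'Dark2')
```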
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with something better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where their text was styled with barely legible contrast. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
* Here is text at 2 to 1
* Here is text at 4 to 1
* Here is text at 7 to 1 (this document)
* Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns apply to color as well. When considering color, one should also think about the background for plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to the area. Packages like ggplot2 and plotly will automatically do this, but some might not, so you should check.
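The following sketch contrasts the two approaches in ggplot2; the patchwork package is assumed to be available, and is used only to place the plots side by side.

```
# area scaling (generally preferred) vs. radius scaling of point size
library(ggplot2)
library(patchwork)   # assumed installed; only for the side-by-side display

p = ggplot(mtcars, aes(x = wt, y = mpg, size = hp)) +
  geom_point(alpha = .5)

p1 = p + scale_size_area(max_size = 10)   # value mapped to point *area*
p2 = p + scale_radius(range = c(1, 10))   # value mapped to the *radius*

p1 | p2
```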
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can be used to add additional information to a plot. In the following scatter plot, we can get a better sense of data density from the fact that the plot is darker where points overlap more.
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
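A minimal sketch of both uses, with the diamonds data standing in for the plots above.

```
# transparency for overplotting and for overlapping group distributions
library(ggplot2)

# darker regions show where points pile up
ggplot(diamonds, aes(x = carat, y = price)) +
  geom_point(alpha = .05)

# each group's distribution stays visible despite the overlap
ggplot(diamonds, aes(x = price, fill = cut)) +
  geom_density(alpha = .3, color = NA) +
  scale_fill_viridis_d()
```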
Accessibility
-------------
Among many things (apparently) rarely considered in typical academic or other visualization is accessibility. The following definition comes from the World Wide Web Consortium.
> Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.
The main message is that not everyone is able to use the web in the same manner. While you won’t be able to satisfy everyone who might come across your work, putting a little thought into your offering can go a long way, and potentially widen your audience.
We talked about this previously, but when communicating visually, one can do simple things like choosing a colorblind\-friendly palette, or using a font contrast that will make it easier on the eyes of those reading your work. There are even browser plugins to test your web content for accessibility. In addition, there are little things like adding a title to inserted images, making links more noticeable etc., all of which can help consumers of your information.
File Types
----------
It’s one thing to create a visualization, but at some point you’re likely going to want to share it. RStudio allows for the export of any visualization created in the Plots or Viewer tab. In addition, various packages may have their own save function that allows you to specify size, file type, or other aspects. Here we’ll discuss some of the options.
* png: These are relatively small in size and ubiquitous on the web. You should feel fine using this format. It does not scale, however, so if you make a smaller image and someone zooms in, it will become blurry.
* gif: These are the type used for all the silly animations you see on the web. Using them is fine if you want to make an animation, but know that it can go longer than a couple seconds, and there is no requirement for it to be asinine.
* jpg: Commonly used for photographs, which isn’t the case with data\-generated graphs. Given their relative size, I don’t see much need for these.
* svg: These take a different approach to imaging and can scale. You can make a very small one and it (potentially) can still look great when zoomed in to a much larger size. Often useful for logos, but possibly in any situation.
As I don’t know what screen will see my visualizations, I generally opt for svg. It may be a bit slower/larger, but in my usage and for my audience size, this is of little concern relative to it looking proper. They also work for pdf if you’re still creating those, and there are also lighter weight versions in R, e.g. svglite. Beyond that I use png, and have no need for others.
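As a sketch of the typical workflow, ggsave picks the graphics device from the file extension; the svglite package is assumed to be installed for svg output.

```
# saving the most recent ggplot in a couple of formats
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point()

ggsave('myplot.svg', width = 6, height = 4)             # scalable vector graphic
ggsave('myplot.png', width = 6, height = 4, dpi = 300)  # raster fallback
```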
Here is a [discussion on stackexchange](https://stackoverflow.com/questions/2336522/png-vs-gif-vs-jpeg-vs-svg-when-best-to-use) that summarizes some of the above. The initial question is old but there have been recent updates to the responses.
Note also, you can import files directly into your documents with R, markdown, HTML tags, or \\(\\LaTeX\\). See `?png` for a starting point. The following demonstrates an image insert for HTML output, with a couple options for centering and size.
`<img src="file.jpg" style="display:block; margin: 0 auto;" width=50%>`
This uses markdown to get the same result:

```
![](file.jpg){width=50%}
```
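From an R code chunk (e.g. in R Markdown), something like the following would also work; sizing would come from chunk options such as `out.width`.

```
# insert an image from a code chunk; e.g. set out.width = '50%' in the chunk options
knitr::include_graphics('file.jpg')
```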
Summary of Thinking Visually
----------------------------
The goal of this section was mostly just to help you realize that there are many things to consider when visualizing information and attempting to communicate the contents of data. The approach is not the same as what one would do in, say, an artistic venture, or where there is nothing specific to impart to an audience. Even some of the most common things you see published are fundamentally problematic, so you can’t even use what people traditionally do as a guide. However, there are many tools available to help you. Another thing to keep in mind is that there is no right way to do a particular visualization, and many ways to have fun with it.
A casual list of things to avoid
--------------------------------
I’m just putting things that come to mind here as I return to this document. Mostly it is personal opinion, though often based on various sources in the data visualization realm or simply my own experience.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
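As a quick illustration of the density alternative, the defaults are usually serviceable.

```
# a default density plot in place of a histogram
library(ggplot2)

ggplot(diamonds, aes(x = carat)) +
  geom_density()
```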
### Using 3D without adding any communicative value
You will often come across uses of 3D in scientific communication that are fairly poor and make the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc., and finally, prefer interactivity to 3D. Where 3D is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
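A minimal sketch of the faceting alternative: a third variable splits the panels rather than adding a depth axis.

```
# multiple 2d panels instead of a 3D display
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(alpha = .5) +
  facet_wrap(~ drv)
```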
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
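If you want to check your answer afterward, one possible completion looks like the following; other scales and palettes would work just as well.

```
library(ggplot2)

ggplot(aes(x = carat, y = price), data = diamonds) +
  geom_point(aes(color = price)) +
  scale_color_viridis_c()   # or e.g. scale_color_scico(palette = 'lajolla')
```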
### Exercise 2
Now color it by `cut` instead of `price`. Use scale\_color\_viridis\_d or scale\_color\_scico\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
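A sketch for the discrete case might look like the following; the palette choices are only examples.

```
library(ggplot2)

ggplot(aes(x = carat, y = price), data = diamonds) +
  geom_point(aes(color = cut)) +
  scale_color_viridis_d(option = 'plasma')   # or scale_color_scico_d(palette = 'batlow')
```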
### Thinking exercises
For your upcoming presentation, *who* is your audience?
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across use of 3D in scientific communication which is fairly poor and makes the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc. and finally, prefer interactivity to 3D. Where it is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by the `cut` instead of `price`. Use scale\_color\_viridis/scioc\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by the `cut` instead of `price`. Use scale\_color\_viridis/scioc\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/thinking_vis.html |
Thinking Visually
=================
Information
-----------
A starting point for data visualization is the information you want to display, and how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of a compelling story. We don’t want to waste the audience’s time or be redundant, and we want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be said with a couple words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has an unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide of a) line on a nonlinear relationship. And finally, the color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumped by sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
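As a rough static sketch of that kind of plot (an approximation, not the exact code behind the figure), the following assumes the `starwars` data that ships with dplyr, which carries the height, mass, sex, and birth year variables mentioned above; passing the result to something like `plotly::ggplotly()` would supply the hover and zoom behavior.
```
library(ggplot2)
library(dplyr)   # supplies the starwars data used here

starwars %>%
  filter(mass < 500) %>%            # drop one extreme mass so the rest stays visible
  ggplot(aes(x = mass, y = height)) +
  geom_smooth(se = FALSE, color = 'gray70') +                     # overall trend line
  geom_point(aes(color = sex, size = birth_year), alpha = .5) +   # several aesthetics at once
  scale_size_area() +
  theme_minimal()
```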
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations at that level of complexity and be okay. It won’t always work, but at least put in the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
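As a small illustration of stripping that kind of clutter with ggplot2, the following minimal sketch drops the background, gridlines, and legend title from an otherwise ordinary scatterplot; how far to go is a judgment call.
```
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point() +
  theme_minimal() +                      # no gray panel background
  theme(
    panel.grid   = element_blank(),      # no gridlines
    legend.title = element_blank()       # no legend title
  )
```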
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may simply prefer something else. For example, for me, the gray background of the ggplot2 defaults is something I have to remove for every plot[53](#fn53).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions with ggplot2 (a short example follows the list):
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
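For example, here is a minimal sketch using ggplot2’s built-in `faithfuld` and `mpg` data, one continuous scale and one discrete:
```
library(ggplot2)

# continuous fill with the default viridis palette
ggplot(faithfuld, aes(x = waiting, y = eruptions, fill = density)) +
  geom_raster() +
  scale_fill_viridis_c()

# discrete color, here with the cividis option
ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  scale_color_viridis_d(option = 'cividis')
```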
### Scientific colors
Yet another set of palettes is available via the scico package, specifically geared toward scientific presentation. These perceptually\-uniform color maps (sequential, diverging, and circular palettes) handle data variations equally all along the color bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions with ggplot2 (see the sketch after the list):
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
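For example, a diverging scico palette suits signed values. The following is a minimal sketch with made-up data; `vik` is just one of the available palettes (see the scico documentation for the full list).
```
library(ggplot2)
library(scico)

# made-up signed values, e.g. differences from some baseline
set.seed(1)
d = data.frame(x = runif(200), y = runif(200), diff = rnorm(200))

ggplot(d, aes(x = x, y = y, color = diff)) +
  geom_point(size = 2) +
  scale_color_scico(palette = 'vik')   # a diverging palette
```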
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions with ggplot2 (a brief example follows the list):
* scale\_color\_brewer
* scale\_fill\_brewer
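For example, a qualitative palette for a handful of discrete groups (a minimal sketch with ggplot2’s `mpg` data):
```
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy, color = drv)) +
  geom_point() +
  scale_color_brewer(palette = 'Dark2')   # qualitative palette for three groups
```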
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with something better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where their text looks like this. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
* Here is text at 2 to 1
* Here is text at 4 to 1
* Here is text at 7 to 1 (this document)
* Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns apply to color as well. When considering color, one should also think about the background of plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
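For reference, the ratio reported above is the standard WCAG contrast ratio, defined from the relative luminance of the two colors. The following base R sketch just spells out that definition; the helper names are mine, and this is not necessarily how visibly computes it under the hood.
```
# WCAG relative luminance of a single color
rel_luminance = function(color) {
  chan = col2rgb(color) / 255
  # linearize the sRGB channels
  lin  = ifelse(chan <= 0.03928, chan / 12.92, ((chan + 0.055) / 1.055)^2.4)
  sum(c(0.2126, 0.7152, 0.0722) * lin)
}

# WCAG contrast ratio of two colors (always >= 1)
contrast_ratio = function(fg, bg) {
  l = sort(c(rel_luminance(fg), rel_luminance(bg)), decreasing = TRUE)
  (l[1] + 0.05) / (l[2] + 0.05)
}

contrast_ratio('#F8766D', 'gray92')   # ~2.25, as reported above
contrast_ratio('#440154', 'gray92')   # ~12.7, as reported above
```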
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to area. Packages like ggplot2 and plotly will scale by area automatically, but some might not, so you should check.
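In ggplot2 terms, this is the difference between `scale_size()`/`scale_size_area()`, which map the value to point area, and `scale_radius()`, which maps it to the radius; a quick sketch for comparison:
```
library(ggplot2)

p = ggplot(mtcars, aes(x = wt, y = mpg, size = hp)) +
  geom_point(alpha = .5)

p + scale_size_area(max_size = 10)    # scales area (generally preferred)
p + scale_radius(range = c(1, 10))    # scales radius; large values look exaggerated
```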
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can be used to add additional information to a plot. In the following scatter plot, we can get a better sense of data density from the fact that the plot is darker where points overlap more.
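A minimal sketch of that idea: with a low alpha, denser regions simply come out darker, so the overlap itself carries information.
```
library(ggplot2)

set.seed(123)
d = data.frame(x = rnorm(5000), y = rnorm(5000))

ggplot(d, aes(x = x, y = y)) +
  geom_point(alpha = .1) +    # points darken where the data are dense
  theme_minimal()
```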
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
Accessibility
-------------
Among many things (apparently) rarely considered in typical academic or other visualization is accessibility. The following definition comes from the World Wide Web Consortium.
> Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.
The main message is that not everyone is able to use the web in the same manner. While you won’t be able to satisfy everyone who might come across your work, putting a little thought into your offering can go a long way, and potentially widen your audience.
We talked about this previously, but when communicating visually, one can do simple things like choosing a colorblind\-friendly palette, or using a font contrast that will make it easier on the eyes of those reading your work. There are even browser plugins to test your web content for accessibility. In addition, there are little things like adding a title to inserted images, making links more noticeable etc., all of which can help consumers of your information.
File Types
----------
It’s one thing to create a visualization, but at some point you’re likely going to want to share it. RStudio allows you to export any visualization created in the Plots or Viewer tab. In addition, various packages may have their own save functions that allow you to specify size, file type, or other aspects. Here we’ll discuss some of the options.
* png: These are relatively small in size and ubiquitous on the web. You should feel fine using this format. It does not scale, however, so if you make a smaller image and someone zooms in, it will become blurry.
* gif: These are the type used for all the silly animations you see on the web. Using them is fine if you want to make an animation, but know that it can go longer than a couple seconds, and there is no requirement for it to be asinine.
* jpg: Commonly used for photographs, which isn’t the case with data generated graphs. Given their relative size I don’t see much need for these.
* svg: These take a different approach to imaging and can scale. You can make a very small one and it (potentially) can still look great when zoomed in to a much larger size. Often useful for logos, but possibly in any situation.
As I don’t know what screen will see my visualizations, I generally opt for svg. It may be a bit slower/larger, but in my usage and for my audience size, this is of little concern relative to it looking proper. They also work for pdf if you’re still creating those, and there are also lighter weight versions in R, e.g. svglite. Beyond that I use png, and have no need for others.
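As one concrete way to save plots from code rather than the RStudio menus, ggplot2’s `ggsave` infers the format from the file extension; a minimal sketch (svg output typically relies on having the svglite package installed).
```
library(ggplot2)

p = ggplot(mpg, aes(x = displ, y = hwy)) + geom_point()

# the file extension determines the format; width/height are in inches by default
ggsave('my_plot.png', p, width = 6, height = 4, dpi = 300)
ggsave('my_plot.svg', p, width = 6, height = 4)
```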
Here is a [discussion on stackexchange](https://stackoverflow.com/questions/2336522/png-vs-gif-vs-jpeg-vs-svg-when-best-to-use) that summarizes some of the above. The initial question is old but there have been recent updates to the responses.
Note also, you can import files directly into your documents with R, markdown, HTML tags, or \\(\\LaTeX\\). See `?png` for a starting point. The following demonstrates an image insert for HTML output, with a couple options for centering and size.
`<img src="file.jpg" style="display:block; margin: 0 auto;" width=50%>`
This uses markdown to get the same result:
```
![](file.jpg){width=50%}
```
Summary of Thinking Visually
----------------------------
The goal of this section was mostly just to help you realize that there are many things to consider when visualizing information and attempting to communicate the contents of data. The approach is not the same as what one would do in, say, an artistic venture, or where there is nothing specific to impart to an audience. Even some of the most common things you see published are fundamentally problematic, so you can’t even use what people traditionally do as a guide. However, there are many tools available to help you. Another thing to keep in mind is that there is no right way to do a particular visualization, and many ways to have fun with it.
A casual list of things to avoid
--------------------------------
I’m just putting things that come to mind here as I return to this document. Mostly it is personal opinion, though often based on various sources in the data visualization realm or simply my own experience.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone who’s used R’s hist function knows the frustration here. Use density plots instead (a quick sketch follows the alternatives below). They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience: is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
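A quick sketch of the swap, using base R’s `faithful` data:
```
# base R histogram with its default breaks
hist(faithful$waiting)

# the corresponding density plot
plot(density(faithful$waiting))

# or the ggplot2 version
library(ggplot2)
ggplot(faithful, aes(x = waiting)) + geom_density()
```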
### Using 3D without adding any communicative value
You will often come across uses of 3D in scientific communication that are fairly poor and make the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, facets, etc., and to prefer interactivity over 3D. Where 3D is useful is in things like showing structure (e.g. molecular, geographical) or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by `cut` instead of `price`. Use scale\_color\_viridis\_d or scale\_color\_scico\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
Information
-----------
A starting point for data visualization regards the information you want to display, and then how you want to display it in order to tell the data’s story. As in statistical modeling, parsimony is the goal, but not at the cost of the more compelling story. We don’t want to waste the time of the audience or be redundant, but we also want to avoid unnecessary clutter, chart junk, and the like.
We’ll start with a couple examples. Consider the following.
So what’s wrong with this? Plenty. Aside from being boring, the entire story can be said with a couple words\- males are taller than females (even in the Star Wars universe). There is no reason to have a visualization. And if a simple group difference is the most exciting thing you have to talk about, not many are going to be interested.
Minor issues can also be noted, including unnecessary border around the bars, unnecessary vertical gridlines, and an unnecessary X axis label.
You might think the following is an improvement, but I would say it’s even worse.
Now the y axis has been changed to distort the difference, perceptually suggesting a height increase of over 34%. Furthermore, color is used but the colors are chosen poorly, and add no information, thus making the legend superfluous. And finally, the above doesn’t even convey the information people think it does, assuming they are even standard error bars, which one typically has to guess about in many journal visualizations of this kind[50](#fn50).
Now we add more information, but more problems!
The above has unnecessary border, gridlines, and emphasis. The labels, while possibly interesting, do not relate anything useful to the graph, and many are illegible. It imposes a straight (and too wide of a) straight line on a nonlinear relationship. And finally, color choice is both terrible and tends to draw one’s eye to the female data points. Here is what it looks like to someone with the most common form of colorblindness. If the points were less clumpy on sex, it would be very difficult to distinguish the groups.
And here is what it might look like when printed.
Now consider the following. We have six pieces of information in one graph\- name (on hover), homeworld (shape), age (size), sex (color), mass (x), and height (y). The colors are evenly spaced from one another, and so do not draw one’s attention to one group over another, or even to the line over groups. Opacity allows the line to be added and the points to overlap without loss of information. We technically don’t need a caption, legend or gridlines, because hovering over the data tells us everything we’d want to know about a given data point. The interactivity additionally allows one to select and zoom on specific areas.
Whether this particular scheme is something you’d prefer or not, the point is that we get quite a bit of information without being overwhelming, and the data is allowed to express itself cleanly.
Here are some things to keep in mind when creating visualizations for scientific communication.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
### Your audience isn’t dumb
Assume your audience, which in academia is full of people with advanced degrees or those aspiring to obtain one, and in other contexts comprises people who are interested in your story, can handle more than a bar graph. If the visualization is good and well\-explained[51](#fn51), they’ll be fine.
See the data visualization and maps sections of [2019: The Year in Visual Stories and Graphics](https://www.nytimes.com/interactive/2019/12/30/us/2019-year-in-graphics.html) at the New York Times. Good data visualization of even complex relationships can be appreciated by more than an academic audience. Assume you can at least provide visualizations on that level of complexity and be okay. It won’t always work, but at least put the same effort you’d appreciate yourself.
### Clarity is key
Sometimes the clearest message *is* a complicated one. That’s okay, science is an inherently fuzzy process. Make sure your visualization tells the story you think is important, and don’t dumb the story down in the visualization. People will remember the graphic before they’ll remember a table of numbers.
By the same token, don’t needlessly complicate something that is straightforward. Perhaps a scatter plot with some groupwise coloring is enough. That’s fine.
All of this is easier said than done, and there is no right way to do data visualizations. Prepare to experiment, and focus on visuals that display patterns that will be more readily perceived.
### Avoid clutter
In striving for clarity, there are pitfalls to avoid. Gridlines, 3d, unnecessary patterning, and chartjunk in general will only detract from the message. As an example, gridlines might even seem necessary, but even faint ones can potentially hinder the pattern recognition you hope will take place, perceptually imposing clumps of data that do not exist. In addition, they practically insist on a level of data precision that in many situations you simply don’t have. What’s more, with interactivity they literally convey nothing additional, as a simple hover\-over or click on a data point will reveal the precise values. Use sparingly, if at all.
### Color isn’t optional
It’s odd for me to have to say this, as it’s been the case for many years, but no modern scientific outlet should be a print\-first outfit, and if they are, you shouldn’t care to send your work there. The only thing you should be concerned with is how it will look online, because that’s how people will interact with your work first and foremost. That means that color is essentially a requirement for any visualization, so use it well in yours. Appropriate color choice will still look fine in black and white anyway.
### Think interactively
It might be best to start by making the visualization *you want to make*, with interactivity and anything else you like. You can then reduce as necessary for publication or other outlets, and keep the fancy one as supplemental, or accessible on your own website to show off.
Color
-----
There is a lot to consider regarding color. Until recently, the default color schemes of most visualization packages were poor at best. Thankfully, ggplot2, its imitators and extenders, in both the R world and beyond, have made it much easier to have a decent color scheme by default[52](#fn52).
However, the defaults are still potentially problematic, so you should be prepared to go with something else. In other cases, you may just simply prefer something else. For example, for me, the gray background of ggplot2 defaults is something I have to remove for every plot[53](#fn53).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions for with ggplot2:
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
### Scientific colors
Yet another set of palettes are available via the scico package, and are specifically geared toward for scientific presentation. These perceptually\-uniform color maps sequential, divierging, and circular pallets, will handle data variations equally all along the colour bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions for with ggplot2:
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions for with ggplot2:
* scale\_color\_brewer
* scale\_fill\_brewer/span\>
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
### Viridis
A couple packages will help you get started in choosing a decent color scheme. One is viridis. As stated in the package description:
> These color maps are designed in such a way that they will analytically be perfectly perceptually\-uniform, both in regular form and also when converted to black\-and\-white. They are also designed to be perceived by readers with the most common form of color blindness.
So basically you have something that will take care of your audience without having to do much. There are four primary palettes, plus one version of the main viridis color scheme that will be perceived by those with any type of color blindness (*cividis*).
These color schemes might seem a bit odd from what you’re used to. But recall that the goal is good communication, and these will allow you to convey information accurately, without implicit bias, and be acceptable in different formats. In addition, there is ggplot2 functionality to boot, e.g. scale\_color\_viridis, and it will work for discrete or continuously valued data.
For more, see the [vignette](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html). I also invite you to watch the [introduction of the original module in Python](https://www.youtube.com/watch?v=xAoljeRJ3lU), where you can learn more about the issues in color selection, and why viridis works.
You can use the following functions for with ggplot2:
* scale\_color\_viridis\_c
* scale\_color\_viridis\_d
* scale\_fill\_viridis\_c
* scale\_fill\_viridis\_d
### Scientific colors
Yet another set of palettes are available via the scico package, and are specifically geared toward for scientific presentation. These perceptually\-uniform color maps sequential, divierging, and circular pallets, will handle data variations equally all along the colour bar, and still work for black and white print. They provide more palettes to go with viridis.
* Perceptually uniform
* Perceptually ordered
* Color\-vision\-deficiency friendly
* Readable as black\-and\-white print
You can use the following functions for with ggplot2:
* scale\_color\_scico
* scale\_color\_scico\_d
* scale\_fill\_scico
* scale\_fill\_scico\_d
I personally prefer these for the choices available, and viridis doesn’t seem to work aesthetically that well in a lot of contexts. More information on their development can be found [here](http://www.fabiocrameri.ch/colourmaps.php).
### RColorBrewer
Color Brewer offers a collection of palettes that will generally work well in a variety of situations, but especially for discrete data. While there are print and color\-blind friendly palettes, not all adhere to those restrictions. Specifically though, you have palettes for the following data situations:
* Qualitative (e.g. Dark2[54](#fn54))
* Sequential (e.g. Reds)
* Diverging (e.g. RdBu)
There is a ggplot2 function, scale\_color\_brewer, you can use as well. For more, see [colorbrewer.org](http://colorbrewer2.org/). There you can play around with the palettes to help make your decision.
You can use the following functions for with ggplot2:
* scale\_color\_brewer
* scale\_fill\_brewer/span\>
In R, you have several schemes that work well right out of the box:
* ggplot2 default palette
* viridis
* scico
* RColorBrewer
Furthermore, they’ll work well with discrete or continuous data. You will have to do some work to come up with better, so they should be your default. Sometimes though, [one can’t help oneself](https://github.com/m-clark/NineteenEightyR).
Contrast
--------
Thankfully, websites have mostly gotten past the phase where there text looks like this. The goal of scientific communication is to, well, *communicate*. Making text hard to read is pretty much antithetical to this.
So contrast comes into play with text as well as color. In general, you should consider a 7 to 1 contrast ratio for text, minimally 4 to 1\.
\-Here is text at 2 to 1
\-Here is text at 4 to 1
\-Here is text at 7 to 1 (this document)
\-Here is black
I personally don’t like stark black, and find it visually irritating, but obviously that would be fine to use for most people.
Contrast concerns regard color as well. When considering color, one should also think about the background for plots, or perhaps the surrounding text. The following function will check for this. Ideally one would pass *AAA* status, but *AA* is sufficient for the vast majority of cases.
```
# default ggplot2 discrete color (left) against the default ggplot2 gray background
visibly::color_contrast_checker(foreground = '#F8766D', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 2.25 fail fail fail fail
```
```
# the dark viridis (right) would be better
visibly::color_contrast_checker(foreground = '#440154', background = 'gray92')
```
```
ratio AA AALarge AAA AAALarge
1 12.7 pass pass pass pass
```
You can’t win all battles however. It will be difficult to choose colors that are perceptually even, avoid color\-blindness issues, have good contrast, work to convey the information you need, and are aesthetically pleasing. The main thing to do is simply make the attempt.
Scaling Size
------------
You might not be aware, but there is more than one way to scale the size of objects, e.g. in a scatterplot. Consider the following, where in both cases dots are scaled by the person’s body\-mass index (BMI).
What’s the difference? The first plot scales the dots by their area, while the second scales the radius, but otherwise they are identical. It’s not generally recommended to scale the radius, as our perceptual system is more attuned to the area. Packages like ggplot2 and plotly will automatically do this, but some might not, so you should check.
Transparency
------------
Using transparency is a great way to keep detailed information available to the audience without being overwhelming. Consider the following. Fifty individual trajectories are shown on the left, but it doesn’t cause any issue graphically. The right has 10 lines plus a fitted line, 20 points and a ribbon to provide a sense of variance. Using transparency and a scientific color scheme allows it to be perceived cleanly.
Without transparency, it just looks ugly, and notably busier if nothing else. This plot is using the exact same scico palette.
In addition, transparency can be used to add additional information to a plot. In the following scatter plot, we can get a better sense of data density from the fact that the plot is darker where points overlap more.
Here we apply transparency to a density plot to convey a group difference in distributions, while still being able to visualize the whole distribution of each group.
Had we not done so, we might not be able to tell what’s going on with some of the groups at all.
In general, a good use of transparency can potentially help any visualization, but consider it especially when trying to display many points, or otherwise have overlapping data.
Accessibility
-------------
Among many things (apparently) rarely considered in typical academic or other visualization is accessibility. The following definition comes from the World Wide Web Consortium.
> Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.
The main message to get is that not everyone is able to use the web in the same manner. While you won’t be able to satisfy everyone who might come across your work, putting a little thought into your offering can go along way, and potentially widen your audience.
We talked about this previously, but when communicating visually, one can do simple things like choosing a colorblind\-friendly palette, or using a font contrast that will make it easier on the eyes of those reading your work. There are even browser plugins to test your web content for accessibility. In addition, there are little things like adding a title to inserted images, making links more noticeable etc., all of which can help consumers of your information.
File Types
----------
It’s one thing to create a visualization, but at some point you’re likely going to want to share it. RStudio will allow for the export of any visualization created in the Plots or Viewer tab. In addition, various packages may have their own save function, that may allow you to specify size, type or other aspects. Here we’ll discuss some of the options.
* png: These are relatively small in size and ubiquitous on the web. You should feel fine in this format. It does not scale however, so if you make a smaller image and someone zooms, it will become blurry.
* gif: These are the type used for all the silly animations you see on the web. Using them is fine if you want to make an animation, but know that it can go longer than a couple seconds, and there is no requirement for it to be asinine.
* jpg: Commonly used for photographs, which isn’t the case with data generated graphs. Given their relative size I don’t see much need for these.
* svg: These take a different approach to imaging and can scale. You can make a very small one and it (potentially) can still look great when zoomed in to a much larger size. Often useful for logos, but possibly in any situation.
As I don’t know what screen will see my visualizations, I generally opt for svg. It may be a bit slower/larger, but in my usage and for my audience size, this is of little concern relative to it looking proper. They also work for pdf if you’re still creating those, and there are also lighter weight versions in R, e.g. svglite. Beyond that I use png, and have no need for others.
Here is a [discussion on stackexchange](https://stackoverflow.com/questions/2336522/png-vs-gif-vs-jpeg-vs-svg-when-best-to-use) that summarizes some of the above. The initial question is old but there have been recent updates to the responses.
Note also, you can import files directly into your documents with R, markdown, HTML tags, or \\(\\LaTeX\\). See `?png` for a starting point. The following demonstrates an image insert for HTML output, with a couple options for centering and size.
`<img src="file.jpg" style="display:block; margin: 0 auto;" width=50%>`
This uses markdown to get the same result
```
{width=50%}
```
Summary of Thinking Visually
----------------------------
The goal of this section was mostly just to help you realize that there are many things to consider when visualizing information and attempting to communicate the contents of data. The approach is not the same as what one would do in say, an artistic venture, or where there is nothing specific to impart to an audience. Even some of the most common things you see published are fundamentally problematic, so you can’t even use what people traditionally do as a guide. However, there are many tools available to help you. Another thing to keep in mind is that there is no right way to do a particular visualization, and many ways, to have fun with it.
A casual list of things to avoid
--------------------------------
I’m just putting things that come to mind here as I return to this document. Mostly it is personal opinion, though often based on various sources in the data visualization realm or simply my own experience.
### Pie
Pie charts and their cousins, e.g. bar charts (and stacked versions), wind rose plots, radar plots etc., either convey too little information, or make otherwise simple information more difficult to process perceptually. The basic pie chart is really only able to convey proportional data. Beyond that, anything done with a pie chart can almost always be done better, at the very least with a bar chart, but you should really consider better ways to convey your data.
Alternatives:
* bar
* densities/stacked densities
* parallel sets/sankey
### Histograms
Anyone that’s used R’s hist function knows the frustration here. Use density plots instead. They convey the same information but better, and typical defaults are usually fine. However, you should really consider the information and audience\- is a histogram or density plot really displaying what you want to show?
Alternatives:
* density
* quantile dotplot
### Using 3D without adding any communicative value
You will often come across use of 3D in scientific communication which is fairly poor and makes the data harder to interpret. In general, when going beyond two dimensions, your first thought should be to use color, size, etc. and finally, prefer interactivity to 3D. Where it is useful is in things like showing structure (e.g. molecular, geographical), or continuous multi\-way interactions.
Alternatives:
* multiple 2d/faceting
### Using too many colors
Some put a completely non\-scientifically based number on this, but the idea holds. For example, if you’re trying to show U.S. state grouping by using a different color for all 50 states, no one’s going to be able to tell the yellow for Alabama vs. the slightly different yellow for Idaho. Alternatives would be to show the information via a map or use a hover over display.
### Using valenced colors when data isn’t applicable
Often we have data that can be thought of as having a positive/negative or valenced nuance. For example, we might want to show values relative to some cut point, or they might naturally have positive and negative values (e.g. sentiment, standardized scores). Oftentimes though, doing so would mean possibly arbitrarily picking a cut point and unnaturally discretizing the data.
The following shows a plot of water risk for many countries. The first plots the color along a continuum with increasing darkness as one goes along, which is appropriate for this score of positive numeric values from 0\-5\. We can clearly see problematic ones while still getting a sense of where other countries lie along that score. The other plot arbitrarily codes a different color scheme, which might suggest some countries are fundamentally different than others. However, if the goal is to show values relative to the median, then it accurately conveys countries above and below that value. If the median is not a useful value (e.g. to take some action upon), then the former plot would likely be preferred.
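The following is only a rough sketch of the two color treatments just described, with made\-up scores rather than the actual water risk data.
```
library(ggplot2)

set.seed(123)
d = data.frame(country = LETTERS[1:10], risk = runif(10, 0, 5))

# sequential scale: appropriate for a positive 0-5 score
ggplot(d, aes(x = reorder(country, risk), y = risk, fill = risk)) +
  geom_col() +
  scale_fill_viridis_c()

# diverging scale centered on the median: only appropriate if the median
# is itself a meaningful reference point
ggplot(d, aes(x = reorder(country, risk), y = risk, fill = risk)) +
  geom_col() +
  scale_fill_gradient2(midpoint = median(d$risk))
```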
### Showing maps that just display population
Many of the maps I see on the web cover a wide range of data and can be very visually appealing, but pretty much just tell me where the most populated areas are, because the value conveyed is highly correlated with it. Such maps are not very interesting, so make sure that your geographical depiction is more informative than this.
### Biplots
A lot of folks doing PCA resort to biplots for interpretation, where a graphical model would be much more straightforward. See [this chapter](http://m-clark.github.io/sem/latent-variables.html) for example.
Thinking Visually Exercises
---------------------------
### Exercise 1
The following uses the diamonds data set that comes with ggplot2. Use the scale\_color\_viridis or scale\_color\_scico function to add a more accessible palette. Use `?` to examine your options.
```
# devtools::install_github("thomasp85/scico") # to use scientific colors
library(ggplot2)
ggplot(aes(x = carat, y = price), data = diamonds) +
geom_point(aes(color = price)) +
????
```
### Exercise 2
Now color it by the `cut` instead of `price`. Use scale\_color\_viridis\_d or scale\_color\_scico\_d. See the helpfile via `?scale_color_*` to see how to change the palette.
### Thinking exercises
For your upcoming presentation, *who* is your audience?
Building Better Data\-Driven Products
=====================================
At this point we’ve covered many topics that will get you from data import and generation to visualizing model results. What’s left? To tell others about what you’ve discovered! While there are any number of ways to present your *data\-driven product*, there are a couple of things to keep in mind regardless of the chosen rendition. Chief among them is building a product that will be intimately connected with all the work that went before it, and which will be consistent across products and (hopefully) over time as well.
We’ll start our discussion of how to present one’s work with some terminology you might have come across:
* *Reproducible research*
* *Repeatable research*
* *Replicable science*
* *Reproducible data analysis*
* *Literate programming*
* *Dynamic data analysis*
* *Dynamic report generation*
Each of these may mean slightly different things depending on the context and background of the person using them, so one should take care to note precisely what is meant. We’ll examine some of these concepts, or at least my particular version of them.
Rep\* Analysis
--------------
Let’s start with the notions of *replicability*, *repeatability*, and *reproducibility*, which are hot topics in various disciplines of late. In our case, we are specifically concerned with programming and analytical results, visualizations, etc. (e.g. as opposed to running an experiment).
To begin, these and related terms are often not precisely defined, and depending on the definition one selects, may be unlikely or even impossible to achieve! I’ll roughly follow the Association for Computing Machinery ([2018](#ref-acm2020)) guidelines since they actually do define them, but mostly just to help organize our thinking about them, and so you can at least know what *I* mean when I use the terms. In many cases, the concepts are best thought of as ideals to strive for, or goals for certain aspects of the data analysis process. For example, in deference to Heraclitus, Cratylus, the Buddha, and others, nothing is exactly replicable, if only because time will have passed, and with it some things will have changed about the process since the initial analysis was conducted\- the people involved, the data collection approach, the analytical tools, etc. Indeed, even your thought processes regarding the programming and analysis are in constant flux while engaged with the data process. However, we can replicate some things or some aspects of the process, possibly even exactly, and thus make the results reproducible. In other cases, even when we can, we may not want to.
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and even certain issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this may not be possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis, and the model to be specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specify the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in other programs than the one that produced it
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
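As a minimal sketch of the packrat workflow (renv is the more recent successor with a very similar interface):
```
# install.packages("packrat")
packrat::init()       # set up a project-specific library and lockfile
# ...install packages and run the analysis as usual...
packrat::snapshot()   # record the exact package versions in use
# later, or on another machine:
packrat::restore()    # reinstall the recorded versions
```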
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently gets lost in practice almost immediately. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example, what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in analysis specific to the project. Doing so would make it much more likely to reproduce the previous results at any stage, even years later.
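For instance, setting a seed is all it takes to make otherwise random results repeatable.
```
set.seed(1234)
x1 = rnorm(5)

set.seed(1234)
x2 = rnorm(5)

identical(x1, x2)  # TRUE: the 'random' draws are reproduced exactly
```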
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
Literate Programming
--------------------
At this point we have an idea of what we want. But how do we get it? There is an additional concept to think about that will help us with regard to programming and data analysis. So let’s now talk about *literate programming*, which is actually an [old idea](http://www.literateprogramming.com/knuthweb.pdf)[59](#fn59).
> I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature.
> \~ Donald Knuth (1984\)
The interweaving of code and text is something many already do in normal scripting. *Comments* in code are not only useful, they are practically required. But in a program script, almost all the emphasis is on the code. With literate programming, we instead focus on the text, and the code exists to help facilitate our ability to tell a (data\-driven) story.
In the early days, the idea was largely to communicate the idea of the computer program itself. Now, at least in the context we’ll be discussing, our usage of literate programming is to generate results that tell the story in a completely human\-oriented fashion, possibly without any reference to the code at all. However, the document, in whatever format, does not exist independently of the code, and cannot be generated without it.
Consider the following example. This code, which is clearly delimited from the text via background and font style, shows how to do an unordered list in *Markdown* using two different methods. Either a `-` or a `*` will denote a list item.
```
- item 1
- item 2
* item 3
* item 4
```
So, we have a statement explaining the code, followed by the code itself. We actually don’t need a code comment, because the text explains the code in everyday language. This is a simple example, but it gets at the essence of the approach. In the document you’re reading right now, code may be visible or not, but when visible, it’s clear what the code part is and what the text explaining the code is.
The following table shows the results of a regression analysis.
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | 37\.29 | 1\.88 | 19\.86 | 0 |
| **wt** | \-5\.34 | 0\.56 | \-9\.56 | 0 |
Fitting linear model: mpg \~ wt
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 32 | 3\.046 | 0\.7528 | 0\.7446 |
You didn’t see the code, but you saw some nicely formatted results. I personally didn’t format anything, however; those are just the default settings. Here is the underlying code.
```
library(magrittr)  # provides the %>% pipe
library(pander)    # pander() renders model summaries as markdown tables

lm(mpg ~ wt, mtcars) %>%
  summary() %>%
  pander(round = 2)
```
Now we see the code, but it isn’t evaluated, because the goal of the text is not the result, but to explain the code. So, imagine a product in which the previous text content explains the results, while the analysis code that produces the result resides right where the text is. Nothing is copied and pasted, and the code and text both reside in the same document. You can imagine how much more easily it is to reproduce a result given such a setup.
The idea of literate programming, i.e. creating human\-understandable programs, can extend beyond reports or slides that you might put together for an analysis, and in fact be used for any setting in which you have to write code at all.
R Markdown
----------
Now let’s shift our focus from concepts to implementation. *R Markdown* provides a means for literate programming. It is a flavor of Markdown, a markup language used pervasively throughout the web. Markdown can be converted to other formats like HTML, but is as easy to use as normal text. R Markdown allows one to combine normal R code with text to produce a wide variety of document formats. This allows for a continuous transition from initial data import and processing to a finished product, whether journal article, software application, slide presentation, or even a website.
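As a minimal sketch, an .Rmd file mixing text and code chunks becomes a finished document with a single call (the file name here is just a placeholder).
```
library(rmarkdown)

# knit the code chunks and convert the result to HTML
render("report.Rmd", output_format = "html_document")
```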
To use R Markdown effectively, it helps to know why you’d even want to. So, in addition to literate programming, let’s talk about some ideas, all of which are related, and which will give you some sense of the goals of effective document generation, and why this approach is superior to others you might try.
[This chapter](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/reproducibility.Rmd) (and the rest of the document) is [available on GitHub](https://github.com/m-clark/data-processing-and-visualization). Looking at the raw content (i.e. the R Markdown files \*.Rmd) versus the finished product will get you well on your way to understanding how to use various tools at your disposal to produce a better data driven product.
Version Control
---------------
A major step toward Rep\* analysis of any kind is having a way to document the process of analysis, find where mistakes were made, revert back to previous states, and more. *Version control* is a means of creating checkpoints in document production. While it was primarily geared toward code, it can be useful for any files created whether they are code, figures, data, or a manuscript of some kind.
Some of you may have experience with version control already and not even know it. For example, if you use Box to collaborate on documents, in the web version you will often see something like `V10` next to the file name, meaning you are looking at the tenth version of the document. If you open the document you could go back to any prior version to see what it looked like at a previous state.
Version control is a necessity for modern coding practice, but it should be extended well beyond that. One of the most popular tools in this domain is called [Git](https://git-scm.com/), and the website of choice for most developers is [GitHub](https://github.com/). I would say that most of the R package developers develop their code there in the form of *repositories* (usually called repos), but also host websites, and house other projects. While Git provides a syntax for the process, you can actually implement it very easily within RStudio after it’s installed. As most are not software developers, there is little need beyond the bare basics of understanding Git to gain the benefits of version control. Creating an account on GitHub is very easy, and you can even create your first repository via the website. However, a good place to start for our purposes is with [Happy Git and GitHub for the useR](https://happygitwithr.com/) Bryan ([2018](#ref-bryan2018happy)). It will take a bit to get used to, but you’ll be so much better off once you start using it.
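If you work in RStudio projects, the usethis package (one convenient option, not a requirement of Git or GitHub) wraps the basic setup in a couple of function calls.
```
library(usethis)

use_git()      # initialize a Git repository for the current project
use_github()   # create a corresponding GitHub repo and push
               # (assumes a GitHub token has already been configured)
```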
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only to be disseminated to a particular audience, but one which possibly will change over time, as the data itself evolves temporally. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown can allow one to automatically create custom presentation products for different audiences on a regular basis without even touching the data for explicit processing, nor the reports after the templates are created, even as the data continues to come in over time. Furthermore, any academic effort that would fall under the heading of science is applicable here.
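A sketch of what that can look like with parameterized R Markdown. The template name and the `audience` parameter are hypothetical, and the template’s YAML header would need to declare `audience` under `params` for this to run.
```
library(rmarkdown)

# render the same template once per audience, each to its own file
for (aud in c("clinical", "administrative", "technical")) {
  render(
    "report_template.Rmd",
    params      = list(audience = aud),
    output_file = paste0("report_", aud, ".html")
  )
}
```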
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem with other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), but contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, and restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is not the dominant format anymore. Even avid print readers must admit they see much more text on a screen than they do on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), get regular feedback from peers along the way via your site’s comment system, and all this before you’d ever send it off to a journal. Now ask yourself what a print\-oriented journal actually offers you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something as opposed to a simple web search or using something like Google Scholar? How many journals do adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings, while there is the same sort of pushback there, even those used to print and powerpoints must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem for other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, as well as restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is not the dominant format anymore. Even avid print readers must admit they see much more text on a screen than they do on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), get regular feedback from peers along the way via your site’s comment system, and all this before you’d ever send it off to a journal. Now ask yourself what a print\-oriented journal actually offers you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something as opposed to a simple web search or using something like Google Scholar? How many journals do adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings, while there is the same sort of pushback there, even those used to print and powerpoints must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/reproducibility.html |
Building Better Data\-Driven Products
=====================================
At this point we’ve covered many topics that will get you from data import and generation to visualizing model results. What’s left? To tell others about what you’ve discovered! While there are any number of ways to present your *data\-driven product*, there are a couple of things to keep in mind regardless of the chosen rendition. Chief among them is building a product that will be intimately connected with all the work that went before it, and which will be consistent across products and (hopefully) over time as well.
We’ll start our discussion of how to present one’s work with some terminology you might have come across:
* *Reproducible research*
* *Repeatable research*
* *Replicable science*
* *Reproducible data analysis*
* *Literate programming*
* *Dynamic data analysis*
* *Dynamic report generation*
Each of these may mean slightly different things depending on the context and background of the person using them, so one should take care to note precisely what is meant. We’ll examine some of these concepts, or at least my particular version of them.
Rep\* Analysis
--------------
Let’s start with the notions of *replicability*, *repeatability*, and *reproducibility*, which are hot topics in various disciplines of late. In our case, we are specifically concerned with programming and analytical results, visualizations, etc. (e.g. as opposed to running an experiment).
To begin, these and related terms are often not precisely defined, and depending on the definition one selects, the goal described may be unlikely or even impossible to achieve! I’ll roughly follow the Association for Computing Machinery guidelines ([2018](#ref-acm2020)), since they actually do define them, but mostly just to help us organize our thinking about them, and so you can at least know what *I* mean when I use the terms. In many cases, the concepts are best thought of as ideals to strive for, or goals for certain aspects of the data analysis process. For example, in deference to Heraclitus, Cratylus, the Buddha, and others, nothing is exactly replicable, if only because time will have passed, and with it some things will have changed about the process since the initial analysis was conducted: the people involved, the data collection approach, the analytical tools, etc. Indeed, even your thought processes regarding the programming and analysis are in constant flux while engaged with the data process. However, we can replicate some things, or some aspects of the process, possibly even exactly, and thus make the results reproducible. In other cases, even when we can, we may not want to.
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, until recently this scenario was not uncommon, and some of the issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated with certainty, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables, and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this may not be possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis, and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specify the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in programs other than the one that produced them
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
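As a rough sketch of what that looks like in practice with packrat (the calls below are the package’s standard init/snapshot/restore workflow; the comments are mine), a project gets its own package library whose exact versions are recorded and can be restored later:

```
packrat::init()       # give the current project a private package library
packrat::snapshot()   # record the installed package versions in a lockfile
# Later, on another machine or years from now:
packrat::restore()    # reinstall the recorded versions from the lockfile
```

The same init/snapshot/restore pattern carries over to packrat’s successor, renv, if you prefer it.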
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or an explicit accounting of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently gets lost almost immediately in practice. I regularly see people befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
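To make the point concrete, here is a small simulation sketch (mine, not from any analysis above): the same data\-generating process, refit on repeated samples, yields widely varying slope estimates when the samples are small, and much tighter ones when they are large.

```
set.seed(123)

sim_slope = function(n) {
  x = rnorm(n)
  y = .3 * x + rnorm(n)      # true slope is .3
  coef(lm(y ~ x))['x']       # estimated slope for this sample
}

summary(replicate(1000, sim_slope(30)))    # small samples: estimates vary widely
summary(replicate(1000, sim_slope(1000)))  # large samples: estimates cluster near .3
```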
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example; what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in the analysis specific to the project. Doing so would make it much more likely that the previous results could be reproduced at any stage, even years later.
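A minimal sketch of the top of such an analysis script might look like the following (the file name, seed value, and packages are placeholders, not a prescription):

```
# analysis.R -- lives in an RStudio project under Git version control
set.seed(1234)       # fix the random number stream so results can be rerun exactly

library(tidyverse)   # packages tracked by packrat for this project

# ... data import, processing, and modeling steps go here ...

sessionInfo()        # record the R version and package versions actually used
```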
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
Literate Programming
--------------------
At this point we have an idea of what we want. But how do we get it? There is an additional concept to think about that will help us with regard to programming and data analysis. So let’s now talk about *literate programming*, which is actually an [old idea](http://www.literateprogramming.com/knuthweb.pdf)[59](#fn59).
> I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature.
> \~ Donald Knuth (1984\)
The interweaving of code and text is something many already do in normal scripting. *Comments* in code are not only useful, they are practically required. But in a program script, almost all the emphasis is on the code. With literate programming, we instead focus on the text, and the code exists to help facilitate our ability to tell a (data\-driven) story.
In the early days, the idea was largely to communicate the idea of the computer program itself. Now, at least in the context we’ll be discussing, our usage of literate programming is to generate results that tell the story in a completely human\-oriented fashion, possibly without any reference to the code at all. However, the document, in whatever format, does not exist independently of the code, and cannot be generated without it.
Consider the following example. This code, which is clearly delimited from the text via background and font style, shows how to do an unordered list in *Markdown* using two different methods. Either a `-` or a `*` will denote a list item.
```
- item 1
- item 2
* item 3
* item 4
```
So, we have a statement explaining the code, followed by the code itself. We actually don’t need a code comment, because the text explains the code in everyday language. This is a simple example, but it gets at the essence of the approach. In the document you’re reading right now, code may be visible or not, but when visible, it’s clear what the code part is and what the text explaining the code is.
The following table shows the results of a regression analysis.
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | 37\.29 | 1\.88 | 19\.86 | 0 |
| **wt** | \-5\.34 | 0\.56 | \-9\.56 | 0 |
Fitting linear model: mpg \~ wt
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 32 | 3\.046 | 0\.7528 | 0\.7446 |
You didn’t see the code, but you saw some nicely formatted results. I didn’t format anything myself, however; those are the default settings. Here is the underlying code.
```
library(pander)    # pander() prints model summaries as markdown tables
library(magrittr)  # provides the %>% pipe

lm(mpg ~ wt, mtcars) %>%
  summary() %>%
  pander(round = 2)
```
Now we see the code, but it isn’t evaluated, because the goal of the text is not the result, but to explain the code. So, imagine a product in which the previous text content explains the results, while the analysis code that produces the result resides right where the text is. Nothing is copied and pasted, and the code and text both reside in the same document. You can imagine how much easier it is to reproduce a result given such a setup.
The idea of literate programming, i.e. creating human\-understandable programs, can extend beyond reports or slides that you might put together for an analysis, and in fact be used for any setting in which you have to write code at all.
R Markdown
----------
Now let’s shift our focus from concepts to implementation. *R Markdown* provides a means for literate programming. It is a flavor of Markdown, a markup language used pervasively throughout the web. Markdown can be converted to other formats like HTML, but is as easy to use as normal text. R Markdown allows one to combine normal R code with text to produce a wide variety of document formats. This allows for a continuous transition from initial data import and processing to a finished product, whether journal article, software application, slide presentation, or even a website.
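As a quick sketch of the mechanics (the file name here is just an example), a single call turns an R Markdown source file into a finished document, rerunning the code chunks and embedding their results along the way:

```
# Knit text and code together into an HTML document
rmarkdown::render("analysis.Rmd", output_format = "html_document")
```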
To use R Markdown effectively, it helps to know why you’d even want to. So, in addition to literate programming, let’s talk about some ideas, all of which are related, and which will give you some sense of the goals of effective document generation, and why this approach is superior to others you might try.
[This chapter](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/reproducibility.Rmd) (and the rest of the document) is [available on GitHub](https://github.com/m-clark/data-processing-and-visualization). Looking at the raw content (i.e. the R Markdown files \*.Rmd) versus the finished product will get you well on your way to understanding how to use various tools at your disposal to produce a better data driven product.
Version Control
---------------
A major step toward Rep\* analysis of any kind is having a way to document the process of analysis, find where mistakes were made, revert back to previous states, and more. *Version control* is a means of creating checkpoints in document production. While it was primarily geared toward code, it can be useful for any files created whether they are code, figures, data, or a manuscript of some kind.
Some of you may have experience with version control already and not even know it. For example, if you use Box to collaborate on documents, in the web version you will often see something like `V10` next to the file name, meaning you are looking at the tenth version of the document. If you open the document you could go back to any prior version to see what it looked like at a previous state.
Version control is a necessity for modern coding practice, but it should be extended well beyond that. One of the most popular tools in this domain is called [Git](https://git-scm.com/), and the website of choice for most developers is [GitHub](https://github.com/). Most R package developers develop their code there in the form of *repositories* (usually just called repos), and many also host websites and other projects there. While Git provides a syntax for the process, you can actually implement it very easily within RStudio once it’s installed. As most of us are not software developers, little more than the bare basics of Git is needed to gain the benefits of version control. Creating an account on GitHub is very easy, and you can even create your first repository via the website. However, a good place to start for our purposes is with [Happy Git and GitHub for the useR](https://happygitwithr.com/) by Bryan ([2018](#ref-bryan2018happy)). It will take a bit to get used to, but you’ll be so much better off once you start using it.
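If you’d rather stay in R for the setup itself, a hedged sketch with the usethis package (assuming Git is installed and your GitHub credentials are configured) covers the basic steps:

```
usethis::use_git()     # initialize a Git repository for the current project
usethis::use_github()  # create a matching GitHub repository and push to it
```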
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only disseminated to a particular audience, but that will also change over time as the data itself evolves. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown can allow one to automatically create custom presentation products for different audiences on a regular basis without even touching the data for explicit processing, nor the reports after the templates are created, even as the data continues to come in over time. Furthermore, any academic effort that would fall under the heading of science is applicable here.
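One way to get that kind of hands\-off regeneration is a parameterized R Markdown report; the sketch below (with hypothetical file, parameter, and output names) re\-renders one template for two audiences:

```
# report.Rmd declares a `params:` field (e.g. `audience`) in its YAML header;
# each call reruns the same template against whatever data it imports.
rmarkdown::render("report.Rmd",
                  params      = list(audience = "executive"),
                  output_file = "report_executive.html")

rmarkdown::render("report.Rmd",
                  params      = list(audience = "technical"),
                  output_file = "report_technical.html")
```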
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem with other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, and restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is no longer the dominant format. Even avid print readers must admit they see much more text on a screen than on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), and get regular feedback from peers along the way via your site’s comment system, all before you’d ever send it off to a journal. Now ask yourself: what does a print\-oriented journal actually offer you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something, as opposed to using a simple web search or something like Google Scholar? How many journals issue adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just by having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to; there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings there is the same sort of pushback, but even those used to print and PowerPoint must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well use tools and an approach that accommodate the things we’ve talked about, in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only to be disseminated to a particular audience, but one which possibly will change over time, as the data itself evolves temporally. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown can allow one to automatically create custom presentation products for different audiences on a regular basis without even touching the data for explicit processing, nor the reports after the templates are created, even as the data continues to come in over time. Furthermore, any academic effort that would fall under the heading of science is applicable here.
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem for other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, as well as restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is not the dominant format anymore. Even avid print readers must admit they see much more text on a screen than they do on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), get regular feedback from peers along the way via your site’s comment system, and all this before you’d ever send it off to a journal. Now ask yourself what a print\-oriented journal actually offers you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something as opposed to a simple web search or using something like Google Scholar? How many journals do adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings, while there is the same sort of pushback there, even those used to print and powerpoints must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Building Better Data\-Driven Products
=====================================
At this point we’ve covered many topics that will get you from data import and generation to visualizing model results. What’s left? To tell others about what you’ve discovered! While there are any number of ways to present your *data\-driven product*, there are a couple of things to keep in mind regardless of the chosen rendition. Chief among them is building a product that will be intimately connected with all the work that went before it, and which will be consistent across products and (hopefully) over time as well.
We’ll start our discussion of how to present one’s work with some terminology you might have come across:
* *Reproducible research*
* *Repeatable research*
* *Replicable science*
* *Reproducible data analysis*
* *Literate programming*
* *Dynamic data analysis*
* *Dynamic report generation*
Each of these may mean slightly different things depending on the context and background of the person using them, so one should take care to note precisely what is meant. We’ll examine some of these concepts, or at least my particular version of them.
Rep\* Analysis
--------------
Let’s start with the notions of *replicability*, *repeatability*, and *reproducibility*, which are hot topics in various disciplines of late. In our case, we are specifically concerned with programming and analytical results, visualizations, etc. (e.g. as opposed to running an experiment).
To begin, these and related terms are often not precisely defined, and depending on the definition one selects, they may be unlikely or even impossible to achieve! I’ll roughly follow the Association for Computing Machinery guidelines Computing Machinery ([2018](#ref-acm2020)) since they actually do define them, but mostly just to help us organize our thinking about them, and so you can at least know what *I* mean when I use the terms. In many cases, the concepts are best thought of as ideals to strive for, or goals for certain aspects of the data analysis process. For example, in deference to Heraclitus, Cratylus, the Buddha, and others, nothing is exactly replicable, if only because time will have passed, and with it some things will have changed about the process since the initial analysis was conducted: the people involved, the data collection approach, the analytical tools, and so on. Indeed, even your thought processes regarding the programming and analysis are in constant flux while engaged with the data process. However, we can replicate some things, or some aspects of the process, possibly even exactly, and thus make the results reproducible. In other cases, even when we can, we may not want to.
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and some of the issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables, and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this may not be possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that, starting from the raw data import, they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specify the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in programs other than the one that produced them
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
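As a minimal sketch of the packrat side of that (assuming packrat is installed, and run from within the project in question), the basic workflow looks something like this:

```
## a minimal sketch: keep a project-specific, versioned package library with packrat
# install.packages("packrat")   # if not already installed

packrat::init()       # create a private library for this project
packrat::snapshot()   # record the exact package versions in packrat.lock
packrat::restore()    # later, or on another machine, reinstall those exact versions
```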
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently gets lost in practice almost immediately. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
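To make the small\-data point concrete, here is a minimal simulation sketch (the function name and sample sizes are just illustrative): the data\-generating process never changes, yet the estimated slope bounces around considerably from one small sample to the next.

```
## a minimal sketch: the same data-generating process, very different 'replications'
set.seed(123)

sample_slope = function(n) {
  x = rnorm(n)
  y = .2*x + rnorm(n)             # true slope is .2
  coef(lm(y ~ x))['x']
}

summary(replicate(100, sample_slope(50)))     # small samples: estimates vary widely
summary(replicate(100, sample_slope(5000)))   # large samples: estimates cluster near .2
```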
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example; what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in analysis specific to the project. Doing so would make it much more likely that the previous results could be reproduced at any stage, even years later.
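As a small sketch of a couple of those habits at the script level (the seed value is arbitrary, and the packages are whatever the project actually uses):

```
## a minimal sketch of reproducibility-minded bookkeeping within a script
set.seed(1234)   # fix the seed so anything involving random number generation reruns exactly

library(dplyr)   # load packages explicitly at the top rather than assuming they are attached

# ... data processing and analysis ...

sessionInfo()    # record the R version and package versions used for this run
```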
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
Literate Programming
--------------------
At this point we have an idea of what we want. But how do we get it? There is an additional concept to think about that will help us with regard to programming and data analysis. So let’s now talk about *literate programming*, which is actually an [old idea](http://www.literateprogramming.com/knuthweb.pdf)[59](#fn59).
> I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature.
> \~ Donald Knuth (1984\)
The interweaving of code and text is something many already do in normal scripting. *Comments* in code are not only useful, they are practically required. But in a program script, almost all the emphasis is on the code. With literate programming, we instead focus on the text, and the code exists to help facilitate our ability to tell a (data\-driven) story.
In the early days, the idea was largely to communicate the idea of the computer program itself. Now, at least in the context we’ll be discussing, our usage of literate programming is to generate results that tell the story in a completely human\-oriented fashion, possibly without any reference to the code at all. However, the document, in whatever format, does not exist independently of the code, and cannot be generated without it.
Consider the following example. This code, which is clearly delimited from the text via background and font style, shows how to do an unordered list in *Markdown* using two different methods. Either a `-` or a `*` will denote a list item.
```
- item 1
- item 2
* item 3
* item 4
```
So, we have a statement explaining the code, followed by the code itself. We actually don’t need a code comment, because the text explains the code in everyday language. This is a simple example, but it gets at the essence of the approach. In the document you’re reading right now, code may be visible or not, but when visible, it’s clear what the code part is and what the text explaining the code is.
The following table shows the results of a regression analysis.
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | 37\.29 | 1\.88 | 19\.86 | 0 |
| **wt** | \-5\.34 | 0\.56 | \-9\.56 | 0 |
Fitting linear model: mpg \~ wt
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 32 | 3\.046 | 0\.7528 | 0\.7446 |
You didn’t see the code, but you saw some nicely formatted results. I personally didn’t format anything, however; those are just the default settings. Here is the underlying code.
```
library(pander)            # provides pander() for formatted markdown tables
library(dplyr)             # provides the %>% pipe

lm(mpg ~ wt, mtcars) %>%   # fit the model on the built-in mtcars data
  summary() %>%
  pander(round = 2)        # print the summary as a formatted table, rounded to 2 decimals
```
Now we see the code, but it isn’t evaluated, because the goal of the text is not the result, but to explain the code. So, imagine a product in which the previous text content explains the results, while the analysis code that produces the result resides right where the text is. Nothing is copied and pasted, and the code and text both reside in the same document. You can imagine how much easier it is to reproduce a result given such a setup.
The idea of literate programming, i.e. creating human\-understandable programs, can extend beyond reports or slides that you might put together for an analysis, and in fact be used for any setting in which you have to write code at all.
R Markdown
----------
Now let’s shift our focus from concepts to implementation. *R Markdown* provides a means for literate programming. It is a flavor of Markdown, a markup language used pervasively throughout the web. Markdown can be converted to other formats like HTML, but is as easy to use as normal text. R Markdown allows one to combine normal R code with text to produce a wide variety of document formats. This allows for a continuous transition from initial data import and processing to a finished product, whether journal article, software application, slide presentation, or even a website.
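For orientation, a minimal R Markdown file looks something like the following sketch; the title and chunk label are purely illustrative, and knitting it would produce an HTML document with the text and the model output woven together.

````
---
title: "A Minimal Example"    # illustrative title
output: html_document
---

Car weight is strongly associated with fuel efficiency, as the model below shows.

```{r model}
summary(lm(mpg ~ wt, mtcars))
```
````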
To use R Markdown effectively, it helps to know why you’d even want to. So, in addition to literate programming, let’s talk about some ideas, all of which are related, and which will give you some sense of the goals of effective document generation, and why this approach is superior to others you might try.
[This chapter](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/reproducibility.Rmd) (and the rest of the document) is [available on GitHub](https://github.com/m-clark/data-processing-and-visualization). Looking at the raw content (i.e. the R Markdown files \*.Rmd) versus the finished product will get you well on your way to understanding how to use various tools at your disposal to produce a better data driven product.
Version Control
---------------
A major step toward Rep\* analysis of any kind is having a way to document the process of analysis, find where mistakes were made, revert back to previous states, and more. *Version control* is a means of creating checkpoints in document production. While it was primarily geared toward code, it can be useful for any files created whether they are code, figures, data, or a manuscript of some kind.
Some of you may have experience with version control already and not even know it. For example, if you use Box to collaborate on documents, in the web version you will often see something like `V10` next to the file name, meaning you are looking at the tenth version of the document. If you open the document you could go back to any prior version to see what it looked like at a previous state.
Version control is a necessity for modern coding practice, but it should be extended well beyond that. One of the most popular tools in this domain is called [Git](https://git-scm.com/), and the website of choice for most developers is [GitHub](https://github.com/). I would say that most R package developers develop their code there in the form of *repositories* (usually called repos), but also host websites, and house other projects. While Git provides a syntax for the process, you can actually implement it very easily within RStudio after it’s installed. As most are not software developers, there is little need beyond the bare basics of understanding Git to gain the benefits of version control. Creating an account on GitHub is very easy, and you can even create your first repository via the website. However, a good place to start for our purposes is with [Happy Git and GitHub for the useR](https://happygitwithr.com/) Bryan ([2018](#ref-bryan2018happy)). It will take a bit to get used to, but you’ll be so much better off once you start using it.
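For those starting from an existing RStudio project, one low\-friction route is sketched below; it assumes the usethis package is installed and that GitHub is the destination.

```
## a minimal sketch, run once from the console within an RStudio project
# install.packages("usethis")

usethis::use_git()      # initialize a local Git repository and make an initial commit
usethis::use_github()   # create a corresponding GitHub repo and push (requires GitHub credentials)
```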
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only to be disseminated to a particular audience, but one which possibly will change over time, as the data itself evolves temporally. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown can allow one to automatically create custom presentation products for different audiences on a regular basis without even touching the data for explicit processing, nor the reports after the templates are created, even as the data continues to come in over time. Furthermore, any academic effort that would fall under the heading of science is applicable here.
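As a sketch of what that kind of automation can look like, one common pattern is a parameterized R Markdown template rendered once per audience; the file name `report.Rmd` and the `audience` parameter here are hypothetical.

```
## a minimal sketch: re-render one hypothetical template for several audiences
library(rmarkdown)

for (audience in c('leadership', 'analysts', 'clients')) {
  render(
    'report.Rmd',                            # template with an `audience` entry under params: in its YAML
    params      = list(audience = audience),
    output_file = paste0('report_', audience, '.html')
  )
}
```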
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem for other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, as well as restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is not the dominant format anymore. Even avid print readers must admit they see much more text on a screen than they do on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), get regular feedback from peers along the way via your site’s comment system, and all this before you’d ever send it off to a journal. Now ask yourself: what does a print\-oriented journal actually offer you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something as opposed to a simple web search or using something like Google Scholar? How many journals do adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings, while there is the same sort of pushback, even those used to print and powerpoints must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Rep\* Analysis
--------------
Let’s start with the notions of *replicability*, *repeatability*, and *reproducibility*, which are hot topics in various disciplines of late. In our case, we are specifically concerned with programming and analytical results, visualizations, etc. (e.g. as opposed to running an experiment).
To begin, these and related terms are often not precisely defined, and depending on the definition one selects, possibly unlikely, or even impossible! I’ll roughly follow the Association for Computing Machinery guidelines Computing Machinery ([2018](#ref-acm2020)) since they actually do define them, but mostly just to help get us organize our thinking about them, and so you can at least know what *I* mean when I use the terms. In many cases, the concepts are best thought of as ideals to strive for, or goals for certain aspects of the data analysis process. For example, in deference to Heraclitus, Cratylus, the Buddha, and others, nothing is exactly replicable, if only because time will have passed, and with it some things will have changed about the process since the initial analysis was conducted\- the people involved, the data collection approach, the analytical tools, etc. Indeed, even your thought processes regarding the programming and analysis are in constant flux while engaged with the data process. However, we can replicate some things or some aspects of the process, possibly even exactly, and thus make the results reproducible. In other cases, even when we can, we may not want to.
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and even certain issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this is may not even possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specifies the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in other programs than the one that produced it
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently immediately gets lost in practice. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example, what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in analysis specific to the project. Doing so would make it much more likely to reproduce the previous results at any stage, even years later.
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and even certain issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this is may not even possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specifies the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in other programs than the one that produced it
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently immediately gets lost in practice. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example, what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in analysis specific to the project. Doing so would make it much more likely to reproduce the previous results at any stage, even years later.
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
Literate Programming
--------------------
At this point we have an idea of what we want. But how do we get it? There is an additional concept to consider that will help us with regard to programming and data analysis. So let’s now talk about *literate programming*, which is actually an [old idea](http://www.literateprogramming.com/knuthweb.pdf)[59](#fn59).
> I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature.
> \~ Donald Knuth (1984\)
The interweaving of code and text is something many already do in normal scripting. *Comments* in code are not only useful, they are practically required. But in a program script, almost all the emphasis is on the code. With literate programming, we instead focus on the text, and the code exists to help facilitate our ability to tell a (data\-driven) story.
In the early days, the goal was largely to communicate the idea of the computer program itself. Now, at least in the context we’ll be discussing, our usage of literate programming is to generate results that tell the story in a completely human\-oriented fashion, possibly without any reference to the code at all. However, the document, in whatever format, does not exist independently of the code, and cannot be generated without it.
Consider the following example. This code, which is clearly delimited from the text via background and font style, shows how to do an unordered list in *Markdown* using two different methods. Either a `-` or a `*` will denote a list item.
```
- item 1
- item 2
* item 3
* item 4
```
So, we have a statement explaining the code, followed by the code itself. We actually don’t need a code comment, because the text explains the code in everyday language. This is a simple example, but it gets at the essence of the approach. In the document you’re reading right now, code may be visible or not, but when visible, it’s clear what the code part is and what the text explaining the code is.
The following table shows the results of a regression analysis.
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | 37\.29 | 1\.88 | 19\.86 | 0 |
| **wt** | \-5\.34 | 0\.56 | \-9\.56 | 0 |
Fitting linear model: mpg \~ wt
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 32 | 3\.046 | 0\.7528 | 0\.7446 |
You didn’t see the code, but you saw some nicely formatted results. I didn’t personally format anything, however; those are the default settings. Here is the underlying code.
```
library(magrittr)  # provides the pipe operator (also loaded via the tidyverse)
library(pander)    # provides pander() for formatted tables

lm(mpg ~ wt, mtcars) %>%
  summary() %>%
  pander(round = 2)
```
Now we see the code, but it isn’t evaluated, because the goal of the text is not the result, but to explain the code. So, imagine a product in which the previous text content explains the results, while the analysis code that produces the result resides right where the text is. Nothing is copied and pasted, and the code and text both reside in the same document. You can imagine how much easier it is to reproduce a result given such a setup.
The idea of literate programming, i.e. creating human\-understandable programs, can extend beyond reports or slides that you might put together for an analysis, and in fact be used for any setting in which you have to write code at all.
R Markdown
----------
Now let’s shift our focus from concepts to implementation. *R Markdown* provides a means for literate programming. It is a flavor of Markdown, a markup language used pervasively throughout the web. Markdown can be converted to other formats like HTML, but is as easy to use as normal text. R Markdown allows one to combine normal R code with text to produce a wide variety of document formats. This allows for a continuous transition from initial data import and processing to a finished product, whether journal article, software application, slide presentation, or even a website.
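As a minimal sketch (the title, chunk label, and model are placeholders, not taken from this document), an R Markdown file interleaves a YAML header, ordinary prose, and R code chunks:

````
---
title: "A Minimal Example"
output: html_document
---

The following model regresses miles per gallon on car weight.

```{r model}
fit = lm(mpg ~ wt, mtcars)
summary(fit)
```
````

Rendering the file, e.g. with rmarkdown::render() or the Knit button in RStudio, runs the chunks and produces the finished HTML (or PDF, Word, slides, etc.) with the results embedded where the code sits.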
To use R Markdown effectively, it helps to know why you’d even want to. So, in addition to literate programming, let’s talk about some ideas, all of which are related, and which will give you some sense of the goals of effective document generation, and why this approach is superior to others you might try.
[This chapter](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/reproducibility.Rmd) (and the rest of the document) is [available on GitHub](https://github.com/m-clark/data-processing-and-visualization). Looking at the raw content (i.e. the R Markdown files \*.Rmd) versus the finished product will get you well on your way to understanding how to use various tools at your disposal to produce a better data driven product.
Version Control
---------------
A major step toward rep\* analysis of any kind is having a way to document the process of analysis, find where mistakes were made, revert back to previous states, and more. *Version control* is a means of creating checkpoints in document production. While it was primarily geared toward code, it can be useful for any files you create, whether they are code, figures, data, or a manuscript of some kind.
Some of you may have experience with version control already and not even know it. For example, if you use Box to collaborate on documents, in the web version you will often see something like `V10` next to the file name, meaning you are looking at the tenth version of the document. If you open the document you could go back to any prior version to see what it looked like at a previous state.
Version control is a necessity for modern coding practice, but it should be extended well beyond that. One of the most popular tools in this domain is [Git](https://git-scm.com/), and the website of choice for most developers is [GitHub](https://github.com/). Most R package developers develop their code there in the form of *repositories* (usually just called repos), but they also host websites and house other projects there. While Git provides a syntax for the process, you can implement it very easily within RStudio once it’s installed. As most people are not software developers, little beyond the bare basics of Git is needed to gain the benefits of version control. Creating an account on GitHub is very easy, and you can even create your first repository via the website. A good place to start for our purposes is [Happy Git and GitHub for the useR](https://happygitwithr.com/) Bryan ([2018](#ref-bryan2018happy)). It will take a bit to get used to, but you’ll be much better off once you start using it.
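For those who prefer to stay in R rather than use the RStudio Git pane or the command line, a hedged sketch using the usethis package (one option among several, and not specifically endorsed by this text) might look like:

```
library(usethis)

# From within an RStudio project:
use_git()     # initialize a Git repository and make an initial commit
use_github()  # create a matching GitHub repo and push to it
              # (assumes a GitHub personal access token is configured)
```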
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only to be disseminated to a particular audience, but one which possibly will change over time, as the data itself evolves temporally. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown allows one to automatically create custom presentation products for different audiences on a regular basis, without touching the data for explicit processing or the reports themselves after the templates are created, even as the data continues to come in over time. Any academic effort that would fall under the heading of science is applicable here as well.
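As a sketch of how that kind of automation might look (the template file name, the `audience` parameter, and the audience values are all hypothetical), one can parameterize a single R Markdown template and render one product per audience:

```
library(rmarkdown)

# 'report_template.Rmd' is a hypothetical template that declares an
# 'audience' parameter in its YAML header and reads the latest data.
audiences = c('executive', 'technical', 'clinical')

for (aud in audiences) {
  render(
    'report_template.Rmd',
    params      = list(audience = aud),
    output_file = paste0('report_', aud, '.html')
  )
}
```

Re-running the same script as new data arrives regenerates every report with no copying and pasting.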
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem for other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, as well as restricts the means with which they can convey it. In addition, for academic outlets and well beyond, print is not the dominant format anymore. Even avid print readers must admit they see much more text on a screen than they do on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), and get regular feedback from peers along the way via your site’s comment system, all before you’d ever send it off to a journal. Now ask yourself: what does a print\-oriented journal actually offer you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something, as opposed to doing a simple web search or using something like Google Scholar? How many journals issue adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just by having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings there is the same sort of pushback, but even those used to print and PowerPoint must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about, in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Rep\* Analysis
--------------
Let’s start with the notions of *replicability*, *repeatability*, and *reproducibility*, which are hot topics in various disciplines of late. In our case, we are specifically concerned with programming and analytical results, visualizations, etc. (e.g. as opposed to running an experiment).
To begin, these and related terms are often not precisely defined, and depending on the definition one selects, possibly unlikely, or even impossible! I’ll roughly follow the Association for Computing Machinery guidelines Computing Machinery ([2018](#ref-acm2020)) since they actually do define them, but mostly just to help get us organize our thinking about them, and so you can at least know what *I* mean when I use the terms. In many cases, the concepts are best thought of as ideals to strive for, or goals for certain aspects of the data analysis process. For example, in deference to Heraclitus, Cratylus, the Buddha, and others, nothing is exactly replicable, if only because time will have passed, and with it some things will have changed about the process since the initial analysis was conducted\- the people involved, the data collection approach, the analytical tools, etc. Indeed, even your thought processes regarding the programming and analysis are in constant flux while engaged with the data process. However, we can replicate some things or some aspects of the process, possibly even exactly, and thus make the results reproducible. In other cases, even when we can, we may not want to.
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and even certain issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this is may not even possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specifies the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in other programs than the one that produced it
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently immediately gets lost in practice. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
These are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example, what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in analysis specific to the project. Doing so would make it much more likely to reproduce the previous results at any stage, even years later.
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
### Example
As our focus will be on data analysis in particular, let’s start with the following scenario. Various versions of a data set are used leading up to analysis, and after several iterations, `finaldata7` is now spread across the computers of the faculty advisor, two graduate students and one undergraduate student. Two of those `finaldata7` data sets, specifically named `finaldata7a` and `finaldata7b`, are slightly different from the other two and each other. The undergraduate, who helped with the processing of `finaldata2` through `finaldata6`, has graduated and no longer resides in the same state, and has other things to occupy their time. Some of the data processing was done with menus in a software package that shall not be named.
The script that did the final analysis, called `results.C`, calls the data using a directory location which no longer exists (and refers only to `finaldata7`). Though it is titled ‘results’, the script performs several more data processing steps, but without comments that would indicate why any of them are being done. Some of the variables are named things like `PDQ` and `V3`, but there is no documentation that would say what those mean.
When writing their research document in Microsoft Word, all the values from the analyses were copied and pasted into the tables and text[55](#fn55). The variable names in the document have no exact match to any of the names in any of the data objects. Furthermore, no reference was provided in the text regarding what software or specific packages were used for the analysis.
And now, several months later, after the final draft of the document was written and sent to the journal, the reviewers have eventually made their comments on the paper, and it’s time to dive back into the analysis. Now, what do you think the odds are that this research group could even reproduce the values reported in the main analysis of the paper?
Sadly, up until recently this was not uncommon, and even certain issues just described are still very common. Such an approach is essentially the antithesis of replicability and reproducible research[56](#fn56). Anything that was only done with menus cannot be replicated for certain, and without sufficient documentation it’s not clear what was done even when there is potentially reproducible code. The naming of files, variables and other objects was done poorly, so it will take unnecessary effort to figure out what was done, to what, and when. And even after most things get squared away, there is still a chance the numbers won’t match what was in the paper anyway. This scenario is exactly what we don’t want.
### Repeatable
*Repeatability* can simply be thought of as whether *you* can run the code and analysis again given the same circumstances, essentially producing the same results. In the above scenario, even this is may not even possible. But let’s say that whoever did the analysis can run their code, it works, and produces a result very similar to what was published. We could then say it’s repeatable. This should only be seen as a minimum standard, though sometimes it is enough.
The notion of repeatability also extends to a specific measure itself. This consistency of a measure across repeated observations is typically referred to as *reliability*. This is not our focus here, but I mention it for those who have the false belief that at least some data driven products are entirely replicable. However, you can’t escape measurement error.
### Reproducible
Now let someone else try the analytical process to see if they can *reproduce* the results. Assuming the same data starting point, they should get the same result using the same tools. For our scenario, if we just start at the very last part, maybe this is possible, but at the least, it would require the data that went into the final analysis and the model being specified in a way that anyone could potentially understand. However, it is entirely unlikely that if they start from the raw data import they would get the same results that are in the Word document. If a research article does not provide the analytical data, nor specifies the model in code or math, it is not reproducible, and we can only take on faith what was done.
Here are some typical non\-reproducible situations:
* data is not made available
* code is not made available
* model is not adequately represented (in math or code)
* data processing and/or analysis was done with menus
* visualizations were tweaked in other programs than the one that produced it
* p\-hacking efforts where ‘outliers’ are removed or other data transformations were undertaken to obtain a desired result, and are not reported or are not explained well enough to reproduce
I find the lack of clear model explanation to be pervasive in some sciences. For example, I have seen articles in medical outlets where they ran a mixed model, yet none of the variance components or even a regression table is provided, nor is the model depicted in a formal fashion. You can forget about the code or data being provided as well. I also tend to ignore analyses done using SPSS, because the only reason to use the program is to not have to use the syntax, making reproducibility difficult at best, if it’s even possible.
Tools like Docker, packrat, and others can ensure that the package environment is the same even if you’re running the same code years from now, so that results should be reproduced if run on the same data, assuming other things are accounted for.
### Replicable
*Replicability*, for our purposes, would be something like, if someone had the same type of data (e.g. same structure), and did the same analysis using their own setup (though with the same or similar tools), would they get the same result (to within some tolerance)?
For example, if I have some new data that is otherwise the same, install the same R packages etc. on my machine rather than yours, will I get a very similar result (on average)? Similarly, if I do the exact same analysis using some other package (i.e. using the same estimation procedure even if the underlying code implementation is different), will the results be highly similar?
Here are some typical non\-replicable situations:
* all of the non\-reproducible/repeatable situations
* new versions of the packages break old code, fix bugs that ultimately change results, etc.
* small data and/or overfit models
The last example is an interesting one, and yet it is also one that is driving a lot of so\-called unreplicated findings. Even with a clear model and method, if you’re running a complex analysis on small data without any regularization, or explicit understanding of the uncertainty, the odds of seeing the same results in a new setting are not very strong. While this has been well known and taught in every intro stats course people have taken, the concept evidently immediately gets lost in practice. I see people regularly befuddled as to why they don’t see the same thing when they only have a couple hundred or fewer observations[57](#fn57). However, the uncertainty in small samples, if reported, should make this no surprise.
For my own work, I’m not typically as interested in analytical replicability, as I want my results to work now, not replicate precisely what I did two years ago. No code is bug free, improvements in tools should typically lead to improvements in modeling approach, etc. In the end, I don’t mind the extra work to get my old code working with the latest packages, and there is a correlation between recency and relevancy. However, if such replicability is desired, specific tools will need to be used, such as version control (Git), containers (e.g. Docker), and similar.
For reference, these are the stated ACM guidelines.
Repeatability (Same team, same experimental setup)
* The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials. For computational experiments, this means that a researcher can reliably repeat her own computation.
Reproducibility (Different team, same experimental setup)
* The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author’s own artifacts.
Replicability (Different team, different experimental setup)
* The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
### Summary of rep\* analysis
In summary, truly rep\* data analysis requires:
* Accessible data, or at least, essentially similar data[58](#fn58)
* Accessible, well written code
* Clear documentation (of data and code)
* Version control
* Standard means of distribution
* Literate programming practices
* Possibly more depending on the stringency of desired replicability
We’ve seen a poor example; what about a good one? For instance, one could start their research as an RStudio project using Git for version control, write their research products using R Markdown, set seeds for random variables, and use packrat to keep the packages used in the analysis specific to the project. Doing so makes it much more likely that previous results can be reproduced at any stage, even years later.
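Two of the easiest habits to adopt along these lines are setting seeds for anything stochastic and recording the session state alongside the results; a minimal sketch:

```
set.seed(1234)    # any stochastic step now gives the same result on every run
sample(1:10, 3)

sessionInfo()     # record the R version and package versions used
```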
* [Reproducibility Guide by ROpenSci](https://ropensci.github.io/reproducibility-guide/)
* [Ten Simple Rules for Reproducible Computational Research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285) Sandve et al. ([2013](#ref-sandve2013ten))
* [Recommendations to Funding Agencies for Supporting Reproducible Research](https://www.amstat.org/asa/files/pdfs/POL-ReproducibleResearchRecommendations.pdf)
Literate Programming
--------------------
At this point we have an idea of what we want. But how do we get it? There is an additional concept to think about that will help us with regard to programming and data analysis. So let’s now talk about *literate programming*, which is actually an [old idea](http://www.literateprogramming.com/knuthweb.pdf)[59](#fn59).
> I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature.
> \~ Donald Knuth (1984\)
The interweaving of code and text is something many already do in normal scripting. *Comments* in code are not only useful, they are practically required. But in a program script, almost all the emphasis is on the code. With literate programming, we instead focus on the text, and the code exists to help facilitate our ability to tell a (data\-driven) story.
In the early days, the idea was largely to communicate the idea of the computer program itself. Now, at least in the context we’ll be discussing, our usage of literate programming is to generate results that tell the story in a completely human\-oriented fashion, possibly without any reference to the code at all. However, the document, in whatever format, does not exist independently of the code, and cannot be generated without it.
Consider the following example. This code, which is clearly delimited from the text via background and font style, shows how to do an unordered list in *Markdown* using two different methods. Either a `-` or a `*` will denote a list item.
```
- item 1
- item 2
* item 3
* item 4
```
So, we have a statement explaining the code, followed by the code itself. We actually don’t need a code comment, because the text explains the code in everyday language. This is a simple example, but it gets at the essence of the approach. In the document you’re reading right now, code may be visible or not, but when visible, it’s clear what the code part is and what the text explaining the code is.
The following table shows the results of a regression analysis.
| | Estimate | Std. Error | t value | Pr(\>\|t\|) |
| --- | --- | --- | --- | --- |
| **(Intercept)** | 37\.29 | 1\.88 | 19\.86 | 0 |
| **wt** | \-5\.34 | 0\.56 | \-9\.56 | 0 |
Fitting linear model: mpg \~ wt
| Observations | Residual Std. Error | \\(R^2\\) | Adjusted \\(R^2\\) |
| --- | --- | --- | --- |
| 32 | 3\.046 | 0\.7528 | 0\.7446 |
You didn’t see the code, but you saw some nicely formatted results. I didn’t format anything myself, however; those are just the default settings. Here is the underlying code.
```
library(magrittr)   # provides the %>% pipe
library(pander)     # formats the model summary as a table

lm(mpg ~ wt, mtcars) %>%
  summary() %>%
  pander(round = 2)
```
Now we see the code, but it isn’t evaluated, because the goal here is not the result, but to explain the code. So, imagine a product in which the preceding text content explains the results, while the analysis code that produces them resides right where the text is. Nothing is copied and pasted, and the code and text both reside in the same document. You can imagine how much easier it is to reproduce a result given such a setup.
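As a rough sketch, the inside of such an R Markdown file simply intermixes prose with code chunks (the chunk label is hypothetical, and the needed packages are assumed to be loaded in a setup chunk):

````
The relationship between weight and mileage is summarized below.

```{r wt-model}
lm(mpg ~ wt, mtcars) %>%
  summary() %>%
  pander(round = 2)
```

As the table shows, heavier cars get notably worse mileage.
````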
The idea of literate programming, i.e. creating human\-understandable programs, can extend beyond reports or slides that you might put together for an analysis, and in fact be used for any setting in which you have to write code at all.
R Markdown
----------
Now let’s shift our focus from concepts to implementation. *R Markdown* provides a means for literate programming. It is a flavor of Markdown, a markup language used pervasively throughout the web. Markdown can be converted to other formats like HTML, but is as easy to use as normal text. R Markdown allows one to combine normal R code with text to produce a wide variety of document formats. This allows for a continuous transition from initial data import and processing to a finished product, whether journal article, software application, slide presentation, or even a website.
To use R Markdown effectively, it helps to know why you’d even want to. So, in addition to literate programming, let’s talk about some ideas, all of which are related, and which will give you some sense of the goals of effective document generation, and why this approach is superior to others you might try.
[This chapter](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/reproducibility.Rmd) (and the rest of the document) is [available on GitHub](https://github.com/m-clark/data-processing-and-visualization). Looking at the raw content (i.e. the R Markdown files \*.Rmd) versus the finished product will get you well on your way to understanding how to use various tools at your disposal to produce a better data driven product.
Version Control
---------------
A major step toward Rep\* analysis of any kind is having a way to document the process of analysis, find where mistakes were made, revert back to previous states, and more. *Version control* is a means of creating checkpoints in document production. While it was primarily geared toward code, it can be useful for any files created whether they are code, figures, data, or a manuscript of some kind.
Some of you may have experience with version control already and not even know it. For example, if you use Box to collaborate on documents, in the web version you will often see something like `V10` next to the file name, meaning you are looking at the tenth version of the document. If you open the document you could go back to any prior version to see what it looked like at a previous state.
Version control is a necessity for modern coding practice, but it should be extended well beyond that. One of the most popular tools in this domain is called [Git](https://git-scm.com/), and the website of choice for most developers is [GitHub](https://github.com/). I would say that most of the R package developers develop their code there in the form of *repositories* (usually called repos), but also host websites, and house other projects. While Git provides a syntax for the process, you can actually implement it very easily within RStudio after it’s installed. As most are not software developers, there is little need beyond the bare basics of understanding Git to gain the benefits of version control. Creating an account on GitHub is very easy, and you can even create your first repository via the website. However, a good place to start for our purposes is with [Happy Git and GitHub for the useR](https://happygitwithr.com/) Bryan ([2018](#ref-bryan2018happy)). It will take a bit to get used to, but you’ll be so much better off once you start using it.
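Within R itself, the usethis package wraps the most common setup steps; a hedged sketch, assuming you are inside an RStudio project and have a GitHub account (and token) configured:

```
library(usethis)

use_git()     # initialize a Git repository for the current project
use_github()  # create a matching GitHub repository and push to it
```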
Dynamic Data Analysis \& Report Generation
------------------------------------------
Sometimes the goal is to create an expression of the analysis that is not only to be disseminated to a particular audience, but one which possibly will change over time, as the data itself evolves temporally. In this dynamic setting, the document must be able to handle changes with minimal effort.
I can tell you from firsthand experience that R Markdown allows one to automatically create custom presentation products for different audiences on a regular basis, without touching the data for explicit processing or the reports after the templates are created, even as the data continues to come in over time. The same applies to any academic effort that would fall under the heading of science.
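One common way to accomplish that kind of automation is with parameterized R Markdown: a single template is rendered repeatedly with different parameter values. A minimal sketch, where `report.Rmd` is a hypothetical template that declares a `group` parameter in its YAML header:

```
library(rmarkdown)

# render the same template once per audience as new data arrives
for (grp in c("clinic_a", "clinic_b")) {
  render(
    "report.Rmd",                                 # hypothetical template
    params      = list(group = grp),              # available as params$group inside the Rmd
    output_file = paste0("report_", grp, ".html")
  )
}
```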
The notion of *science as software development* is something you should get used to. Print has had its day, but is not the best choice for scientific advancement as it should take place. Waiting months for feedback, or a year to get a paper published after it’s first sent for review, and then hoping people have access to a possibly pay\-walled outlet, is simply unacceptable. Furthermore, what if more data comes in? A data or modeling bug is found? Other studies shed additional light on the conclusions? In this day and age, are we supposed to just continue to cite a work that may no longer be applicable while waiting another year or so for updates?
Consider [arxiv.org](https://arxiv.org/). Researchers will put papers there before they are published in journals, ostensibly to provide an openly available, if not *necessarily* 100% complete, work. Others put working drafts or just use it as a place to float some ideas out there. It is a serious outlet however, and a good chunk of the articles I read in the stats world can be found there.
Look closely at [this particular entry](https://arxiv.org/abs/1507.02646). As I write this there have been 6 versions of it, and one has access to any of them.[60](#fn60) If something changes, there is no reason not to have a version 7 or however many one wants. In a similar vein, many of my own documents on machine learning, Bayesian analysis, generalized additive models, etc. have been regularly updated for several years now.
Research is never complete. Data can be augmented, analyses tweaked, visualizations improved. Will the products of your own efforts adapt?
Using Modern Tools
------------------
The main problem with other avenues you might use, like MS Word and \\(\\LaTeX\\),[61](#fn61) is that they were created for printed documents. However, not only is printing unnecessary (and environmentally problematic), contorting a document to the confines of print potentially distorts or hinders the meaning an author wishes to convey, and restricts the means by which they can convey it. In addition, for academic outlets and well beyond, print is no longer the dominant format. Even avid print readers must admit they see much more text on a screen than on a page on a typical day.
Let’s recap the issues with traditional approaches:
* Possibly not usable for rep\* analysis
* Syntax gets in the way of fluid text
* Designed for print
* Wasteful if printed
* Often very difficult to get visualizations/tables to look as desired
* No interactivity
The case for using a markdown approach is now years old and well established. Unfortunately many, but not all, journals are still print\-oriented[62](#fn62), because their income depends on the assumption of print, not to mention a closed\-source, broken, and anti\-scientific system of review and publication that dates [back to the 17th century](https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century). Consider the fact that you could blog about your research while conducting it, present preliminary results in your blog via R Markdown (because your blog itself is done via R Markdown), get regular feedback from peers along the way via your site’s comment system, and all this before you’d ever send it off to a journal. Now ask yourself what a print\-oriented journal actually offers you? When was the last time you actually opened a print version of a journal? How often do you go to a journal site to look for something as opposed to a simple web search or using something like Google Scholar? How many journals do adequate retractions when problems are found[63](#fn63)? Is it possible you may actually get more eyeballs and clicks on your work just having it on your own website[64](#fn64) or [tweeting about it](https://www.altmetric.com/about-altmetrics/what-are-altmetrics/)?
The old paradigm is changing because it has to, and there is practically no justification for the traditional approach to academic publication, and even less for other outlets. In the academic world, outlets are starting to require pre\-registration of study design, code, data archiving measures, and other changes to the usual send\-a\-pdf\-and\-we’ll\-get\-back\-to\-you approach[65](#fn65). In non\-academic settings, despite similar pushback, even those used to print and PowerPoint must admit they’d prefer an interactive document that works on their phone if needed. As such, you might as well be using tools and an approach that accommodate the things we’ve talked about in order to produce a better data\-driven product.
For more on tools for reproducible research in R, see the [task view](https://cran.r-project.org/web/views/ReproducibleResearch.html).
Getting Started
===============
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to write a document in plain text, with bits of other things thrown in, which will ultimately be converted to any number of other languages, especially HTML, for eventual display in the format you desire.
The basic Markdown syntax has seen little development in many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter what flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is that knowing one flavor means you know some Markdown, and thus can easily work with the others.
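For example, a few common bits of Markdown look like the following, and work the same across essentially all flavors.

```
# A header
## A smaller header

Some text with *emphasis*, **bold**, `inline code`, and a [link](https://example.com).

- a bullet point
- another bullet point
```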
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding them and other formats here.
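Whichever you pick, the output format lives in the document’s YAML header, so switching later is mostly a matter of changing one line, even if the appearance won’t carry over perfectly. A minimal header might look like the following.

```
---
title: "My Analysis"
author: "Me"
output: html_document   # or pdf_document, or word_document
---
```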
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown; three options are shown in the dialog, though two other notable ones are bizarrely absent. Two of those shown are HTML based, and you should not even consider Beamer/pdf (i.e. \\(\\LaTeX\\)). Slides, if done well, are not viable for text\-focused printing, and in fact, really don’t work for text in general. They should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but in a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are an unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be an expert in those other languages.
You can run shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
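A minimal Shiny app is just a UI definition and a server function handed to shinyApp; a sketch:

```
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 500, value = 100),
  plotOutput("scatter")
)

server <- function(input, output) {
  output$scatter <- renderPlot(
    plot(rnorm(input$n), rnorm(input$n))
  )
}

shinyApp(ui = ui, server = server)  # runs locally; deploy via e.g. shinyapps.io
```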
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to when you open a document, with a couple specific options, and demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
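You can also create a document from an installed template without the menus, via rmarkdown::draft. For example, assuming the rticles package and its jss_article template are installed:

```
# create a new R Markdown file from a package-provided template
rmarkdown::draft(
  "my_article.Rmd",
  template = "jss_article",   # template name provided by the package
  package  = "rticles"
)
```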
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to use plain text for a document with bits of other things thrown in, but which will ultimately be converted to any number of other languages, especially HMTL, for eventual display in a format you desire.
The basic markdown syntax hasn’t even really been developed for many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter what flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is knowing one flavor means you know some Markdown, and thus would easily work with others.
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding it and other formats here.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown, with three options shown, though two are bizarrely absent. Two shown are HTML based, and you should not even consider Beamer/pdf (i.e. \\(\\LaTeX\\)). Slides, if done well, are not viable for text\-focused printing, and in fact, really don’t work for text in general. They should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are a unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be expert in those other languages.
You can run shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to when you open a document, with a couple specific options, and demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/getting_started.html |
Getting Started
===============
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to use plain text for a document with bits of other things thrown in, but which will ultimately be converted to any number of other languages, especially HMTL, for eventual display in a format you desire.
The basic markdown syntax hasn’t even really been developed for many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter what flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is knowing one flavor means you know some Markdown, and thus would easily work with others.
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding it and other formats here.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown, with three options shown, though two are bizarrely absent. Two shown are HTML based, and you should not even consider Beamer/pdf (i.e. \\(\\LaTeX\\)). Slides, if done well, are not viable for text\-focused printing, and in fact, really don’t work for text in general. They should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are a unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be expert in those other languages.
You can run shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to when you open a document, with a couple specific options, and demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to use plain text for a document with bits of other things thrown in, but which will ultimately be converted to any number of other languages, especially HMTL, for eventual display in a format you desire.
The basic markdown syntax hasn’t even really been developed for many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter what flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is knowing one flavor means you know some Markdown, and thus would easily work with others.
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding it and other formats here.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown, with three options shown, though two are bizarrely absent. Two shown are HTML based, and you should not even consider Beamer/pdf (i.e. \\(\\LaTeX\\)). Slides, if done well, are not viable for text\-focused printing, and in fact, really don’t work for text in general. They should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are a unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be expert in those other languages.
You can run shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to when you open a document, with a couple specific options, and demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/getting_started.html |
Getting Started
===============
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to use plain text for a document with bits of other things thrown in, but which will ultimately be converted to any number of other languages, especially HMTL, for eventual display in a format you desire.
The basic markdown syntax hasn’t even really been developed for many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter what flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is knowing one flavor means you know some Markdown, and thus would easily work with others.
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding it and other formats here.
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown, with three options shown, though two are bizarrely absent. Two shown are HTML based, and you should not even consider Beamer/pdf (i.e. \\(\\LaTeX\\)). Slides, if done well, are not viable for text\-focused printing, and in fact, really don’t work for text in general. They should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are a unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be expert in those other languages.
You can run shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to when you open a document, with a couple specific options, and demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
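For example, once a template\-providing package is installed, you can also start a new file from the console with `rmarkdown::draft()`; the file, template, and package names below are just for illustration (the rticles package is one common source of templates).
```
# create a new Rmd pre-filled from a package-supplied template
rmarkdown::draft(
  'my_article.Rmd',
  template = 'jss_article',
  package  = 'rticles'
)
```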
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/getting_started.html |
Getting Started
===============
What is Markdown?
-----------------
*Markdown* is basically a syntax (a markup language) that conveys how text should be displayed. In practice, it allows you to write a document in plain text, with bits of other things thrown in, which will ultimately be converted to any number of other languages, especially HTML, for eventual display in the format you desire.
The basic Markdown syntax hasn’t really changed in many years, but there are now dozens of *flavors*, of which R Markdown is one. Most Markdown syntax is preserved and works identically no matter which flavor you use. However, the different flavors will have different options or slightly different implementations of certain things. The main point is that knowing one flavor means you know some Markdown, and thus can easily work with others.
Documents
---------
To start using R Markdown, simply go to `File/New File/R Markdown...`
As you can see right away, you have your choice of several types of formats, some of which will be of more interest to you as you gain familiarity with R Markdown.
Documents are what you’ll likely use most, especially since they can be used in place of normal R scripts. You have the choice of HTML, PDF and MS Word for the output. The main thing you’ll want to do is make your choice early, because it is not really possible to have the document look exactly how you’d want in all formats simultaneously. More detail on standard documents is forthcoming in the next chapter, but we’ll cover some general options regarding it and other formats here.
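That choice ultimately comes down to a line or two of YAML; for example, any one (or several) of the following can be listed, and the Knit button will let you pick among them (the title is a placeholder):
```
---
title: "My Document"
output:
  html_document: default
  pdf_document: default
  word_document: default
---
```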
### Standard HTML
As mentioned, the default, and most commonly used document for R Markdown, is the standard, single\-page HTML document. It is highly flexible, has several default themes available, works well for short or longer works, different screen sizes, etc. Some of the other different formats (e.g. presentations) are variations of it. You will probably want to get comfortable with standard HTML before moving on to other types of formats, but it will serve you well even as you advance your R Markdown skills.
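A few of the options people commonly tweak for the standard HTML document, as a sketch (any of these can be omitted):
```
---
title: "My Analysis"
output:
  html_document:
    theme: united
    toc: true
    toc_float: true
    code_folding: hide
---
```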
### R notebooks
Many are using *R Notebooks* as their primary scripting tool, and these are definitely something to consider for your own approach. While there are some drawbacks in efficiency, many like the feel of them. In my experience, they lack the smooth approach of Jupyter Notebooks, and they really aren’t as suited to publishable work as the standard R Markdown document (at least I’ve had issues). Having your output inline also means you may be looking at stale results that don’t reflect previous data changes, and it’s easy for a few lines of code to become what would be equivalent to several printed pages of output. Again though, they might be useful for some purposes.
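On the file side, a notebook is mostly just a different output format declared in the YAML, for example:
```
---
title: "Exploratory Notebook"
output: html_notebook
---
```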
### Distill
One of the newer formats for R Markdown is the *Distill* template. It is specifically suited toward scientific publishing, and is very visually appealing (in my opinion). Along with some subtle alternative default settings relative to the standard HTML document, it has asides (marginal presentation), hover\-over citations and footnotes, easy incorporation of metadata (e.g. license, author contribution), and other useful things that are, or should be, common to a scientific publication. You can even create a website with it ([as I have](https://m-clark.github.io))!
### Bookdown
You’re reading the output of a *Bookdown* template now. You might not think you will have that much to say, but if you are using a standard HTML document and it tends to get notably long, you might prefer the bookdown format to the constant scrolling. You could even use it for a slide\-like presentation, but with a lot more flexibility. Many R publications, and more all the time, are being published via bookdown rather than traditional print (or both). See [bookdown.org](https://bookdown.org/) and [my own website](https://m-clark.github.io) for more examples.
Presentations
-------------
You can do slide\-style presentations with R Markdown. Three options are shown in the New File dialog, though two notable ones are bizarrely absent. Of the three shown, two are HTML\-based, and you should not even consider the third, Beamer/PDF (i.e. \\(\\LaTeX\\)). Slides, if done well, are not suited to text\-focused printing, and really don’t work for text in general; they should be very visual if they are to be effective. The two notable formats not shown are [revealjs](http://rmarkdown.rstudio.com/revealjs_presentation_format.html)[66](#fn66) and the kind you can create by going to `File/New File/R Presentation`, which is also revealjs but in a different format. I do not recommend the latter. In addition, many seem to be really high on [xaringan](https://github.com/yihui/xaringan), which is based on *remarkjs*, but, after using it several times, I am not sure what it offers over the others.
Creating a presentation is easy enough, and the following shows an example.
```
---
title: "Habits"
output: ioslides_presentation
---
# In the morning
## Getting up
- Turn off alarm
- Get out of bed
## Breakfast
- Eat eggs
- Drink coffee
# In the evening
## Dinner
- Eat spaghetti
- Drink wine
```
You should really question whether you need slides. They are an unnecessarily restrictive format, do not work well with text, and often don’t work well with interactive visualizations. Furthermore, their development doesn’t appear to be as much of a priority for the RStudio crowd relative to other formats (rightly so in my opinion). And finally, there is nothing substantive they offer that can’t be done with a standard HTML doc or its variants.
Apps, Sites \& Dashboards
-------------------------
*Shiny* is an inherently interactive format geared toward the creation of websites and applications. While there are far more apt programming languages than R for creating a website/app, at least Shiny allows you to stay completely within the R environment, and that means you don’t have to be an expert in those other languages.
You can run Shiny apps on your machine well enough, though usually the point is to make something other people can interact with. This means you’ll need some place to house your work, and [shinyapps.io](https://www.shinyapps.io) allows for some free hosting along with other options. As long as you have a web server, people will be able to access your work. Other formats in this area to be aware of are `websites` and `flexdashboard`.
Templates
---------
Templates are available for any number of things, and one can find plenty among specific packages. Once a package with a particular template is installed, you’ll then have it as an option here. All these typically do is provide an R Markdown file similar to the one you get when you open a new document, with a couple of specific options, and a demonstration of them if applicable. It’s not much, but at least it will save you a little effort.
After you get the hang of R Markdown, you should strongly consider making your own template. It’s actually pretty easy, and then you’ll always have the option.
How to Begin
------------
The best way to get started with R Markdown is to see a document you like, copy the relevant parts for your own document, and get to it! It really is the best way in my opinion. Many people host their files on [GitHub](https://github.com), so you can just download it directly from there. The author of the bookdown package, a particular format for R Markdown, actually suggests people simply clone his repository for his book that teaches bookdown, and go from there. That’s how I started using it, and it became my favored format for longer documents, and even presentations.
In summary, just see what others are doing, and then tailor it to your own needs.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/standard_documents.html |
Standard Documents
==================
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
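Concretely, knitting comes down to a call like the following (the file name is a placeholder); the Knit button in RStudio does this for you.
```
# render to the default format specified in the YAML
rmarkdown::render('my_report.Rmd')

# or explicitly request a format
rmarkdown::render('my_report.Rmd', output_format = 'html_document')
```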
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in an R Markdown document is the same as anywhere else. There are a couple of things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: `*word*` for italics and `**word**` for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: `![optional label](path/to/image)`
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
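You can see which engines knitr is aware of with the following; any of these names can be used in place of `r` in a chunk header, provided the underlying language is available on your system.
```
# list the chunk engines knitr supports (python, bash, sql, stan, etc.)
names(knitr::knit_engines$get())
```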
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to evaluate
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
The effect of this on scientific reporting cannot be overstated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
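As a small sketch of that principle (the model here is arbitrary), compute results in a chunk and only ever reference them inline:
```
```{r model-summary}
fit = lm(mpg ~ wt, data = mtcars)
r2  = round(summary(fit)$r.squared * 100, 1)
```

The model explains `r r2`% of the variance in `mpg`.
```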
### Labels
All chunks should be given a label. This makes them easy to find within your document, because there are two outlines available to you: one that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2, etc., it doesn’t help you know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
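Mechanically, a child document is pulled in with the `child` chunk option; the file name below is hypothetical.
```
```{r, child = 'section_01_intro.Rmd'}
```
```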
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
---
title: "`r params$institution_name`"
params:
  institution_name: 'U of M'
  data_folder: 'results/data'
---

``{r load-data}
load(file.path(params$data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title ‘U of M’ and load data from the designated data folder. The parameters are available as R objects in the `params` list (e.g. `params$data_folder`), and are set before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
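To generate the tailored reports, you can then render the same template with different parameter values from the console; the file names and values here are placeholders.
```
# knit one report per institution from a single template
rmarkdown::render(
  'report_template.Rmd',
  params      = list(institution_name = 'MSU', data_folder = 'results/msu'),
  output_file = 'report_msu.html'
)
```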
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own documents, then using version control to keep track of respective contributions, and merging as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for a document in general, and in my experience it has worked quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard HTML document would apply to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and can even use R instead of Python. Jupyter Notebooks are superior to the R Notebook format in most respects, particularly in look, feel, and interactivity. They are entirely browser\-based, so whatever you are constructing will look the same if converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format itself, I don’t find it as easy to customize Jupyter notebooks as it is to customize the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple of knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, browser or terminal scripting, or would have to directly manipulate HTML/CSS. None of this is necessarily a deal\-breaker, but most would probably prefer a more straightforward way to change the look.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/standard_documents.html |
Standard Documents
==================
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: \**word*\* for italics \*\***word**\*\* for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
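As a brief sketch of the numeric usage mentioned for **echo** above (knitr treats the numbers as indices of the expressions in the chunk), the following would display only the first line of code while still evaluating everything; the label and code are placeholders.
```
```{r lines_demo, echo = 1, eval = TRUE}
x = rnorm(100)   # shown in the output
mean(x)          # evaluated, but not shown
```
```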
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using them, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, the result reads as ordinary text, because the inline code is replaced by its value:
Here is a sentence whose sum is 4.
This sentence has a value of 1.955294.
The effect of this on scientific reporting cannot be overstated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
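In practice that usually means computing values in a chunk and only referencing them inline; here is a minimal sketch, where the model and object names are just examples.
```
```{r fit_model}
# fit a model and round the quantity we want to report
fit   = lm(mpg ~ wt, mtcars)
slope = round(coef(fit)['wt'], 2)
```

Each additional 1000 lbs of weight is associated with a change of `r slope` mpg.
```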
### Labels
All chunks should be given a label. This makes them easy to find within your document, because there are two outlines available to you: one that shows your text headers (to the right), and one you can click to reveal that also shows your chunks (bottom left). If the chunks just say Chunk 1, Chunk 2, etc., the outline doesn’t help you know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit the parent you get one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this: a single long page requires a lot of scrolling that isn’t actually necessary for the presentation of the material, and it takes longer to load, because everything has to load at once[70](#fn70). See the [appendix](appendix.html#appendix) for details.
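If you do go the child-document route, the parent document pulls in the pieces via the `child` chunk option; the file names here are hypothetical.
```
```{r, child = c('introduction.Rmd', 'analysis.Rmd', 'discussion.Rmd')}
```
```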
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
---
title: "`r params$institution_name`"
params:
  institution_name: 'U of M'
  data_folder: 'results/data'
---

```{r}
load(file.path(params$data_folder, 'myfile.RData'))
```
```
The result of the above would create a document with the title ‘U of M’ and load data from the designated data folder. The parameters are set before anything else in the document is evaluated, and are available as elements of the `params` list (e.g. `params$institution_name`, `params$data_folder`).
Parameterized reports combined with child documents mean you can essentially keep one child template for all reports and merely change the YAML parameters for each one.
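To actually generate the tailored reports, you can render the same template repeatedly while overriding the parameters; the file and institution names below are hypothetical.
```
```{r, eval = FALSE}
library(rmarkdown)

# one tailored HTML report per institution
for (inst in c('U of M', 'MSU')) {
  render(
    'report_template.Rmd',
    params      = list(institution_name = inst),
    output_file = paste0('report_', gsub(' ', '_', inst), '.html')
  )
}
```
```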
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own documents, then using version control to keep track of respective contributions and merging as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for documents in general, and in my experience it has worked quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard HTML document applies to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and can even use R instead of Python. Jupyter Notebooks are superior to the R Notebook format in most respects, particularly in look, feel, and interactivity. They are used entirely in the browser, so whatever you construct will look the same when converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format itself, I don’t find Jupyter notebooks as easy to customize as the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple of knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, possibly via the browser or terminal scripting, or would have to directly manipulate HTML/CSS. None of this is necessarily a deal-breaker, but most people would probably prefer a more straightforward way to change the look.
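For comparison, here is the sort of small YAML adjustment that can noticeably change a basic R Markdown HTML document’s look; the specific theme and options are just examples of what the html_document format accepts.
```
---
title: "My Analysis"
output:
  html_document:
    theme: flatly
    highlight: tango
    toc: true
    toc_float: true
    code_folding: hide
---
```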
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: \**word*\* for italics \*\***word**\*\* for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own document, then use version control keep track of respective contributions, and merge as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for a document in general, and in my experience, it has quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard html document would apply to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and even can use R instead of Python as well. Jupyter Notebooks are superior to the R Notebook format in most respects, but particularly in look and feel, and interactivity. They are entirely browser\-based to use, so whatever you are constructing will look the same if converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format, I don’t find it as easy to customize Jupyter notebooks as it is the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, possibly via the browser, terminal scripting, or would have to directly manipulate html/css. None of this is necessarily a deal\-breaker, but most probably would prefer a more straightforward way to change the look.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/standard_documents.html |
Standard Documents
==================
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: \**word*\* for italics \*\***word**\*\* for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own document, then use version control keep track of respective contributions, and merge as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for a document in general, and in my experience, it has quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard html document would apply to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and even can use R instead of Python as well. Jupyter Notebooks are superior to the R Notebook format in most respects, but particularly in look and feel, and interactivity. They are entirely browser\-based to use, so whatever you are constructing will look the same if converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format, I don’t find it as easy to customize Jupyter notebooks as it is the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, possibly via the browser, terminal scripting, or would have to directly manipulate html/css. None of this is necessarily a deal\-breaker, but most probably would prefer a more straightforward way to change the look.
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: \**word*\* for italics \*\***word**\*\* for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own document, then use version control keep track of respective contributions, and merge as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for a document in general, and in my experience, it has quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard html document would apply to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and even can use R instead of Python as well. Jupyter Notebooks are superior to the R Notebook format in most respects, but particularly in look and feel, and interactivity. They are entirely browser\-based to use, so whatever you are constructing will look the same if converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format, I don’t find it as easy to customize Jupyter notebooks as it is the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, possibly via the browser, terminal scripting, or would have to directly manipulate html/css. None of this is necessarily a deal\-breaker, but most probably would prefer a more straightforward way to change the look.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/standard_documents.html |
Standard Documents
==================
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: \**word*\* for italics \*\***word**\*\* for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then separate each element by a line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to show
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
This effect of this in scientific reporting cannot be understated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
title: institution_name
params:
institution_name: 'U of M'
data_folder: 'results/data'
---
``{r}
load(paste0(data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title of ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` are processed as R objects before anything else about the document is created.
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
Collaboration
-------------
*R Notebooks* are a format one can use that might be more suitable for code collaboration. They are identical to the standard HTML document in most respects, but chunks will by default print output in the Rmd file itself. For example, a graduate student could write up a notebook, and their advisor could then look at the document, change the code as needed etc. Of course, you could just do this with a standard R script as well. Some prefer the inline output however.
For more involved collaborations, I would suggest partitioning the sections into their own document, then use version control keep track of respective contributions, and merge as needed. Such a process was designed for software development, but there’s no reason it wouldn’t work for a document in general, and in my experience, it has quite well.
Using Python for Documents
--------------------------
Most of what we’ve discussed for the standard html document would apply to the Python world as well. The main document format there is the [Jupyter Notebook](https://jupyter.org/). Like RStudio and R Markdown, Jupyter notebooks seamlessly integrate code and text via markdown, and even can use R instead of Python as well. Jupyter Notebooks are superior to the R Notebook format in most respects, but particularly in look and feel, and interactivity. They are entirely browser\-based to use, so whatever you are constructing will look the same if converted to HTML. Almost all the chapters in this document regarding programming, analysis, and visualization have corresponding [Jupyter notebooks demonstrating the same thing with Python](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), including the [discussion of notebooks themselves](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks/jupyter.ipynb).
Beyond the notebook format, I don’t find it as easy to customize Jupyter notebooks as it is the standard R Markdown HTML document and other formats (e.g. slides, bookdown, interactive web, etc.), which is probably why most of the Jupyter notebooks you come across look exactly the same. With R, one or two clicks in RStudio, or a couple knitr/YAML option adjustments, can make the final product for a basic HTML document look notably different. For Jupyter, one would need various extensions, possibly via the browser, terminal scripting, or would have to directly manipulate html/css. None of this is necessarily a deal\-breaker, but most probably would prefer a more straightforward way to change the look.
R Markdown files
----------------
R Markdown files, with extension `*.Rmd`, are a combination of text, R code, and possibly other code or syntax, all within a single file. Various packages, e.g. rmarkdown, knitr, pandoc, etc., work behind the scenes to knit all those pieces into one coherent whole, in whatever format is desired[67](#fn67). The knitr package is the driving force behind most of what is done to create the final product.
#### HTML
I personally do everything in HTML because it’s the most flexible, and easiest to get things looking the way you want. Presumably at some point, these will simply be the default that people both use and expect in the academic realm and beyond, as there is little additional value that one can get with PDF or MS Word, and often notably less. Furthermore, academia is an anachronism. How much do you engage PDF and Word for *anything* else relative to how much you make use of HTML (i.e. the web)?
Text
----
Writing text in a R Markdown document is the same as anywhere else. There are a couple things you’ll use frequently though.
* Headings/Subheadings: Specified \#, \#\#, \#\#\# etc.
* Italics \& bold: `*word*` for italics, `**word**` for bold. You can also use underscores (some Markdown flavors may require it).
* Links: `[some_text](http://webaddress.com)`
* Image: ``
* Lists: Start with a dash or number, and make sure the first element is separated from any preceding text by a full blank line. Then put each element on its own line.
```
Some *text*.
- List item 1
- List item 2
1. item #1
2. item #2
```
That will pretty much cover most of your basic text needs. For those that know HTML \& CSS, you can use those throughout the text as you desire as well. For example, sometimes I need some extra space after a plot and will put in a `<br>`.
Code
----
### Chunks
After text, the most common thing you’ll have is code. The code resides in a *chunk*, and looks like this. You can add it to your document with the `Insert` menu in the upper right of your Rmd file, but as you’ll be needing to do this all the time, instead you’ll want to use the keyboard shortcut of Ctrl/Cmd \+ Alt/Option \+ I[68](#fn68).
```
```{r}
x = rnorm(10)
```
```
There is no limit to what you put in an R chunk. I don’t recommend it, but it could be hundreds of lines of code! You can put these anywhere within the document. Other languages, e.g. Python, can be used as well, as long as knitr knows where to look for the engine for the code you want to insert[69](#fn69).
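If you’re curious which engines your knitr installation knows about, you can list them from the console:

```
# names of the language engines knitr can dispatch chunks to
# (e.g. python, bash, sql, stan, ...)
names(knitr::knit_engines$get())
```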
#### Chunk options
There are many things to specify for a specific chunk, or to apply to all chunks. The example demonstrates some of the more common ones you might use.
```
```{r mylabel, echo = TRUE, eval = TRUE, cache = FALSE, out.width = '50%'}
# code
```
```
These do the following:
* **echo**: show the code; can be logical, or line numbers specifying which lines to show
* **eval**: evaluate the code; can be logical, or line numbers specifying which lines to evaluate
* **cache**: logical, whether to cache the results for later reuse without reevaluation
* **out.width**: figure width, can be pixels, percentage, etc.
You can also specify these as defaults for the whole document by using a chunk near the beginning that looks something like this.
```
```{r setup}
knitr::opts_chunk$set(
echo = T,
message = F,
warning = F,
error = F,
comment = NA,
R.options = list(width = 220),
dev.args = list(bg = 'transparent')
)
```
```
There are [quite a few options](https://yihui.org/knitr/options/), so familiarize yourself with what’s available, even if you don’t plan on using it, because you never know.
### In\-line
R code doesn’t have to be in a chunk. You can put it right in the middle of a sentence.
```
Here is a sentence whose sum is `r 2 + 2`.
```
```
This sentence has a value of `r x[1]`.
```
When you knit the document, it will look like ordinary text because you aren’t using an R chunk:
Here is a sentence whose sum is 4\.
This sentence has a value of 1\.955294\.
The effect of this on scientific reporting cannot be overstated.
> **Your goal in writing a document should be to not explicitly write a single number that’s based on data.**
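One way to keep such inline values consistent is to define a small formatting helper in a setup chunk and call it inline; a sketch (the function name is just an example):

```
# define once, e.g. in your setup chunk, then use inline as r fmt(mean(x))
fmt = function(x, digits = 2) format(round(x, digits), big.mark = ',')

fmt(1234.5678) # "1,234.57"
```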
### Labels
All chunks should be given a label. This makes it easy to find it within your document because there are two outlines available to you. One that shows your text headers (to the right), and one that you can click to reveal that will also show your chunks (bottom left). If they just say Chunk 1, Chunk 2 etc., it doesn’t help you to know what they’re doing. There is also some potential benefit in terms of caching, which we’ll discuss later.
### Running code
You don’t have to knit the document to run the code, and often you’ll be using the results as you write the document. You can run a single chunk or multiple chunks. Use the shortcuts instead of the menu.
By default, when you knit the document, all code will be run. Depending on a variety of factors, this may or may not be what you want to do, especially if it is time\-consuming to do so. We’ll talk about how to deal with this issue in the next part.
Multiple Documents
------------------
### Knitting multiple documents into one
A single `.Rmd` file can call others, referred to as *child documents*, and when you knit that document you’ll have one single document with the content from all of them. You may want to consider other formats, such as *bookdown*, rather than doing this. Scrolling a lot is sometimes problematic, and not actually required for the presentation of material. It also makes the content take longer to load, because everything has to load[70](#fn70). See the [appendix](appendix.html#appendix) for details.
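If you do go this route, a child document is pulled in via the `child` chunk option (the chunk body is left empty); a minimal sketch with hypothetical file names:

```
```{r, child = c('01_intro.Rmd', '02_methods.Rmd')}
```
```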
### Parameterized reports
In many cases one may wish to create multiple separate reports that more or less have the same structure. The standard scenario is creating tailored reports, possibly customized for different audiences. In this case you may have a single template which all follow. We can use the YAML configuration to set or load R data objects as follows.
```
---
title: "`r params$institution_name`"
params:
  institution_name: 'U of M'
  data_folder: 'results/data'
---
``{r}
load(file.path(params$data_folder, 'myfile.RData'))
``
```
The result of the above would create a document with the title ‘U of M’ and load data from the designated data folder. The `institution_name` and `data_folder` parameters are available via the `params` list before anything else about the document is created.
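From the console, the same template can then be rendered with different parameter values; a sketch with illustrative file and folder names:

```
library(rmarkdown)

# knit the same template for a different institution and data folder
render(
  'report_template.Rmd',
  params      = list(institution_name = 'MSU', data_folder = 'results/data_msu'),
  output_file = 'report_msu.html'
)
```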
Parameterized reports combined with child documents mean you can essentially have one document template for all reports as a child template and merely change the YAML configuration for the rest.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/customization.html |
Customization \& Configuration
==============================
Now that you have a document ready to go, you’ll want to customize it to make it look the way you want. There is basically nothing you can’t change by using R packages to enhance output, using custom themes to control the overall look, and using various other tools to your liking.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
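For instance, after choosing a theme and turning on the table of contents, the top of the file might look something like this (exact values will depend on your selections):

```
---
title: "My Document"
output:
  html_document:
    theme: united
    highlight: tango
    toc: true
---
```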
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, and whatever options you selected previously. You can change the title, add a date etc. There is a lot of other stuff too. Here is the YAML for this document.
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a programming syntax all its own, so how deep you want to get into it is up to you. The best way, just like learning R Markdown generally, is to simply see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
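As a rough sketch of that htmltools approach (using a built\-in data set for the plot):

```
```{r}
plot(pressure)
htmltools::br() # renders as a line break in the HTML output
```
```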
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
``
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`{width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
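For example, size and alignment can then be handled via chunk options rather than HTML:

```
```{r, out.width = '25%', fig.align = 'center'}
knitr::include_graphics('img/R.ico')
```
```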
And finally, you’ll want to hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io'>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file, you’ll want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could also add a subtle background, make links bold, or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom css class I created called ‘func’ as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, once you get some settings you use often, you’ll not want to start from scratch, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. Same goes for CSS or other files you use often.
Over time, these files and settings will grow, especially as you learn new options and want to tweak old. In the end, you may have very little to do to make the document look great the first time you knit it!
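As a low\-tech sketch of that approach (paths are hypothetical):

```
# start a new report from a starter file holding your usual YAML header,
# CSS reference, and knitr setup chunk
file.copy('templates/starter.Rmd', 'new_report.Rmd')
```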
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/customization.html |
Customization \& Configuration
==============================
Now that you have a document ready to go, you’ll want to customize it to make it look the way you want. There is basically nothing you can’t change by using R packages to enhance output, using custom themes to control the overall look, and using various other tools to your liking.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, and whatever options you selected previously. You can change the title, add a date etc. There is a lot of other stuff too. Here is the YAML for this document.
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a programming syntax all its own, so how deep you want to get into it is up to you. The best way, just like learning R Markdown generally, is to simply see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
``
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`{width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
And finally, you’ll want to is hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file. Note that you want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could add a subtle background to it, make them bold or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom css class I created called ‘func’ as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, once you get some settings you use often, you’ll not want to start from scratch, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. Same goes for CSS or other files you use often.
Over time, these files and settings will grow, especially as you learn new options and want to tweak old. In the end, you may have very little to do to make the document look great the first time you knit it!
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
Customization \& Configuration
==============================
Now that you have a document ready to go, you’ll want to customize it to make it look the way you want. There is basically nothing you can’t change by using R packages to enhance output, using custom themes to control the overall look, and using various other tools to your liking.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, take a look at what has changed at the top of your document. You might see something like the following.
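For instance, choosing a theme, a syntax highlighting style, and a table of contents might produce a header along these lines (the theme and highlight names here are just illustrative choices):
```
---
title: "Untitled"
output:
  html_document:
    theme: united
    highlight: tango
    toc: true
---
```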
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, and whatever options you selected previously. You can change the title, add a date etc. There is a lot of other stuff too. Here is the YAML for this document.
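A header with several of these pieces filled in might look something like the following sketch (the specific values are illustrative, not an exact copy of this document’s settings):
```
---
title: "My Document"
author: "Me"
date: "`r Sys.Date()`"
output:
  html_document:
    theme: flatly
    highlight: zenburn
    css: mystyle.css
    toc: true
    toc_float: true
always_allow_html: yes
---
```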
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a programming syntax all its own, so how deep you want to get into it is up to you. The best way, just like learning R Markdown generally, is to simply see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
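For example, a rule like the following in a CSS file would give every image a bit of breathing room (a sketch; adjust the values to taste):
```
img {
  margin: 20px 0;
}
```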
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
`![](img/R.ico)`
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`![](img/R.ico){width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
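For instance, setting a couple of chunk options would center the image and control its size (a sketch; the specific values are arbitrary):
```
# in the chunk header: {r, fig.align='center', out.width='25%'}
knitr::include_graphics('img/R.ico')
```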
And finally, you’ll want to hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io'>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file, you’ll want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could add a subtle background to it, make them bold or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom CSS class I created called ‘func’, as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, once you get some settings you use often, you’ll not want to start from scratch, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. Same goes for CSS or other files you use often.
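As a sketch, such a starter file might contain little more than a YAML header like the one shown earlier, plus a setup chunk that sets your usual knitr defaults:
```
knitr::opts_chunk$set(
  echo      = TRUE,
  message   = FALSE,
  warning   = FALSE,
  fig.align = 'center'
)
```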
Over time, these files and settings will grow, especially as you learn new options and want to tweak old. In the end, you may have very little to do to make the document look great the first time you knit it!
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look things up, as we haven’t explicitly discussed them.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, and whatever options you selected previously. You can change the title, add a date etc. There is a lot of other stuff too. Here is the YAML for this document.
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a programming syntax all its own, so how deep you want to get into it is up to you. The best way, just like learning R Markdown generally, is to simply see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
``
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`{width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
And finally, you’ll want to is hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file. Note that you want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could add a subtle background to it, make them bold or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom css class I created called ‘func’ as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
``
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`{width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
And finally, you’ll want to is hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file. Note that you want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could add a subtle background to it, make them bold or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom css class I created called ‘func’ as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, once you get some settings you use often, you’ll not want to start from scratch, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. Same goes for CSS or other files you use often.
Over time, these files and settings will grow, especially as you learn new options and want to tweak old. In the end, you may have very little to do to make the document look great the first time you knit it!
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/customization.html |
Customization \& Configuration
==============================
Now that you have a document ready to go, you’ll want to customize it to make it look the way you want. There is basically nothing you can’t change by using R packages to enhance output, using custom themes to control the overall look, and using various other tools to your liking.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, and whatever options you selected previously. You can change the title, add a date etc. There is a lot of other stuff too. Here is the YAML for this document.
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a programming syntax all its own, so how deep you want to get into it is up to you. The best way, just like learning R Markdown generally, is to simply see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
This fix is easy, just add `<br>` after the R chunk that creates the plot to add a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
``
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`{width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
And finally, you’ll want to is hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file. Note that you want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could add a subtle background to it, make them bold or whatever.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color, as they are wrapped in a custom css class I created called ‘func’ as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, once you get some settings you use often, you’ll not want to start from scratch, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. Same goes for CSS or other files you use often.
Over time, these files and settings will grow, especially as you learn new options and want to tweak old. In the end, you may have very little to do to make the document look great the first time you knit it!
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
Output Options
--------------
The basic document comes with several options to apply to your output. You’ll find a cog wheel in the toolbar area underneath the tabs.
Note that the inline vs. console stuff mostly just has to do with the actual .Rmd file, not the output, so we’re going to ignore it[71](#fn71). Within the options you can apply some default settings to images, code, and more.
### Themes etc.
As a first step, simply play around with the themes you already have available. For quick, one\-off documents that you want to share without a lot of fuss, choosing one of these will make your document look good without breaking a sweat.
As another example, choose a new code style with the syntax highlighting. If you have headings in your current document, go ahead and turn on table of contents.
For many of the documents you create, changing the defaults this way may be enough, so be familiar with your options.
After making your selections, now see what has changed at the top of your document. You might see something like the following.
I’m sure you’ve been wondering at this point, so what is that stuff anyway? That is YAML[72](#fn72). So let’s see what’s going on.
YAML
----
For the purposes of starting out, all you really need to know is that YAML is like configuration code for your document. You can see that it specifies what the output is, along with whatever options you selected previously. You can change the title, add a date, and so on. There is a lot of other stuff too. Here is the sort of thing you might find in the YAML for a document like this one.
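As a rough sketch only (the title, author, and option values are placeholders, not the literal header of this document), a fuller header might look something like:

```
---
title: "My Document"
author: "Me"
date: "`r Sys.Date()`"
output:
  html_document:
    theme: united
    toc: true
    toc_float: true
    css: mystyle.css
always_allow_html: yes
---
```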
Clearly, there is a lot to play with, but it will depend on the type of document you’re doing. For example, the `always_allow_html: yes` is pointless for an HTML document, but would allow certain things to be (very likely poorly) attempted in a PDF or Word document. Other options only make sense for bookdown documents.
There is a lot more available too, as YAML is a configuration language in its own right, so how deep you want to get into it is up to you. The best way to learn, just as with R Markdown generally, is simply to see what others do and apply it to your own document. It may take a bit of trial and error, but you’ll eventually get the hang of it.
HTML \& CSS
-----------
### HTML
Knowing some basic HTML can add little things to your document to make it look better. As a minimal example, here is a plot followed by text.
Even with a return space between this line you are reading and the plot, this text is smack against it. I do not prefer this.
The fix is easy: just add `<br>` after the R chunk that creates the plot to insert a line break.
This text has some room to breathe. Alternatively, I could use htmltools and put `br()` in the code after the plot. Possibly the best option would be to change the CSS regarding images so that they all have a bit of padding around them.
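As a minimal sketch of the htmltools route (assuming the package is installed), a short chunk placed right after the plotting chunk can emit the break itself:

```
library(htmltools)

br()  # knits to a <br> tag in the HTML output
```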
While you have a CSS file to make such changes, you can also do so in\-line.
This sentence is tyrian purple, bold, and has bigger font because I put `<span style='color:#66023C; font-size:150%; font-weight:600'>` before it and `</span>` after it.
Say you want to center and resize an image. Basic Markdown is too limited to do much more than display the image, so use some HTML instead.
Here is the basic markdown image.
`![](img/R.ico)`
A little more functionality has been added to the default approach, such that you can add some options in the following manner (no spaces!).
`![](img/R.ico){width=25%}`
Next we use HTML instead. This will produce a centered image that is slightly smaller.
`<img src="img/R.ico" style="display: block; margin: 0 auto;" width=40px>`
While the `src` and `width` are self\-explanatory, the style part is where you can do one\-off CSS styling, which we’ll talk about next. In this example, it serves to center the image. Taking `display: block` out and changing the margins to 0 will default to left\-alignment within the part of the page (container) the image resides in.
`<img src="img/R.ico" style="margin: 0 0;" width=40px>`
We can also use an R chunk with code as follows, which would allow for adjustments via chunk options.
```
knitr::include_graphics('img/R.ico')
```
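For instance, the chunk header (stripped from the display above) is where such adjustments would go; `out.width` and `fig.align` are standard knitr chunk options, and the values here are only examples:

````
```{r, out.width='25%', fig.align='center'}
knitr::include_graphics('img/R.ico')
```
````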
And finally, you’ll want to hone your ASCII art skills, because sometimes that’s the best way to display an image, like this ocean sunset.
```
^^ @@@@@@@@@
^^ ^^ @@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@ ^^
@@@@@@@@@@@@@@@@@@@@
~~~~ ~~ ~~~~~ ~~~~~~~~ ~~ &&&&&&&&&&&&&&&&&&&& ~~~~~~~ ~~~~~~~~~~~ ~~~
~ ~~ ~ ~ ~~~~~~~~~~~~~~~~~~~~ ~ ~~ ~~ ~
~ ~~ ~~ ~~ ~~ ~~~~~~~~~~~~~ ~~~~ ~ ~~~ ~ ~~~ ~ ~~
~ ~~ ~ ~ ~~~~~~ ~~ ~~~ ~~ ~ ~~ ~~ ~
~ ~ ~ ~ ~ ~~ ~~~~~~ ~ ~~ ~ ~~
~ ~ ~ ~ ~~ ~ ~
```
### CSS
Recall the style section in some of the HTML examples above. For example, the part `style='color:#66023C; font-size:150%; font-weight:600'` changed the font[73](#fn73). It’s actually CSS, and if we need to do the same thing each time, we can take an alternative approach to creating a style that would apply the same settings to all objects of the same class or HTML tag throughout the document.
The first step is to create a `*.css` file that your R Markdown document can refer to. Let’s say we want to make every link dodgerblue. Links in HTML are tagged with the letter **`a`**, and to insert a link with HTML you can do something like:
```
<a href='https://m-clark.github.io'>wowee zowee!</a>
```
It would look like this: [wowee zowee!](https://m-clark.github.io). If we want to change the color from the default setting for all links, we go into our CSS file.
```
a {
color: dodgerblue;
}
```
Now our links would look like this: [wowee zowee!](https://m-clark.github.io)
You can use hexadecimal, RGB and other representations of practically any color. CSS, like HTML, has a fairly simple syntax, but it’s very flexible, and can do a ton of stuff you wouldn’t think of. With experience and looking at other people’s CSS, you’ll pick up the basics.
Now that you have a CSS file, you’ll want to specify it in the YAML section of your R Markdown document.
```
output:
html_document:
css: mystyle.css
```
Now every link you create will be that color. We could also give links a subtle background, make them bold, or whatever else.
```
a {
color: dodgerblue;
background-color: #f2f2f2;
font-weight: 800;
}
```
Now it becomes [wowee zowee!](https://m-clark.github.io). In a similar fashion, you could make images always display at 50% width by default.
```
img {
width: 50%;
}
```
### Custom classes
You can also create custom classes. For example, all R functions in my documents are a specific color because they are wrapped in a custom CSS class I created called ‘func’, defined as follows[74](#fn74).
```
.func {
color: #007199;
font-weight: 500;
}
```
Then I can do `<span class="func">crossprod</span>` and the text of the function name, or any text of class func, will have the appropriate color and weight.
Personal Templates
------------------
A common mantra in computer programming and beyond is DRY, or Don’t Repeat Yourself. If you start using R Markdown a lot, and there is a good chance of that, you’ll settle on some settings you use often, and you won’t want to start from scratch each time, but simply reuse them. While this can be done [formally](https://rmarkdown.rstudio.com/developer_document_templates.html) by creating an R package, it can also be as simple as saving a file that just has the YAML and maybe some knitr options specified, and starting from that file each time. The same goes for CSS or other files you use often.
Over time, these files and settings will grow, especially as you learn new options and want to tweak old ones. In the end, you may have very little to do to make the document look great the first time you knit it!
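A personal template can be as small as the following sketch (the theme, CSS file name, and chunk defaults are all placeholders for your own preferences):

````
---
title: "Untitled"
output:
  html_document:
    theme: united
    toc: true
    css: mystyle.css
---

```{r setup, include=FALSE}
# chunk defaults reused across documents
knitr::opts_chunk$set(echo = FALSE, message = FALSE, warning = FALSE, fig.align = 'center')
```
````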
The Rabbit Hole Goes Deep
-------------------------
How much you want to get into customization is up to you. Using the developer tools of any web browser allows you to inspect what anyone else has done as far as styling with CSS. Here is an example of Chrome Developer Tools, which you can access through its menus.
All browsers have this, making it easy to see exactly what’s going on with any webpage.
For some of you, if you aren’t careful, you’ll spend an afternoon on an already finished document trying to make it look perfect. It takes very little effort to make a great looking document with R Markdown. Making it *perfect* is impossible. You have been warned.
R Markdown Exercises
--------------------
### Exercise 1
* Create an `*.Rmd` for HTML.
* Now change some configuration options: choose a theme and add a table of contents. For the latter, create some headings/sections and sub\-sections so that you can see your configuration in action.
```
# Header 1
## Header 2
```
### Exercise 2
* Add a chunk that does the following (or something similar): `summary(mtcars)`
* Add a chunk that produces a visualization. If you need an example, create a density plot of the population total variable from the midwest data set in the ggplot2 package. Now align it with the `fig.align` chunk option.
* Add a chunk similar to the previous but have the resulting document hide the code, just showing the visualization.
* Now add a chunk that *only* shows the code, but doesn’t actually run it.
* Add a chunk that creates an R object such as a set of numbers or text. Then use that object in the text via inline R code. For example, show only the first element of the object in a sentence.
```
Yadda yadda `r object[1]` hey that's neat!
```
* **Bonus**: Set a chunk option that will be applied to the whole document. For example, make the default figure alignment be centered, or have the default be to hide the code.
### Exercise 3
* Italicize or bold some words.
* Add a hyperlink.
* Add a line break via HTML. Bonus: use htmltools and the `br()` function to add a line break within an R chunk. See what happens when you simply put several line returns.
* Change your output to PDF.
### Exercise 4
For these, you’ll have to look it up, as we haven’t explicitly discussed it.
* Add a title and subtitle to your document (YAML)
* Remove the \# from the R chunk outputs (Chunk option)
* Create a quoted block. (Basic Markdown)
### Exercise 5
For this more advanced exercise, you’d have to know a little CSS, but just doing it once will go quite a ways to helping you feel comfortable being creative with your own CSS files.
* Create a `*.css` file to set an option for your link color. Don’t forget to refer to it in your YAML configuration section of the Rmd file. Just add something like `css: file/location/file.css`.
* Create a special class of links and add a link of that class.
Summary
=======
With the right tools, data exploration can be:
* easier
* faster
* more efficient
* more fun!
Use them to wring your data dry of what it has to offer.
See the references for recommended next steps and…
Embrace a richer understanding of your data!
Appendix
========
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in; they will be displayed and numbered in the order they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere, adding references to the text of the document is very easy. Many file formats are acceptable, such as BibTeX, EndNote, and more. For example, if using BibTeX and the file is `refs.bib`, just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The citation style can be almost anything you want, but custom styles take a couple of extra steps to use. Consult the R Markdown documentation for the details.
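For instance, custom styles are typically supplied as a CSL file referenced in the YAML (the file name below is just an example; you would download or create the actual file):

```
bibliography: refs.bib
csl: apa.csl
link-citations: yes
```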
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation.
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example an academic article, you won’t want a single `*.Rmd` file for the whole thing, any more than you’d want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to redo a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by simply ignoring the problematic part. A more compelling reason regards collaboration: your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
````
```{r, child='introduction.Rmd'}
```

```{r, child='results.Rmd'}
```

```{r, child='discussion.Rmd'}
```
````
You work on the individual sections separately, and when you knit the index file, everything comes together. What’s more, if you’re creating an HTML document, you can now put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the new color no longer had sufficient contrast even at the most lenient web accessibility standards. Nor did my original color once the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file comes in handy, fixing a hundred problems with one line of code, and leaving you with a template you can use from there on out. Since you’ll be using HTML more and more as you use R Markdown, things like this become increasingly important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
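As a sketch of the kind of one-line fix meant here, a darker link color in the CSS file is often all it takes (the value below is illustrative; check any specific color pair with a contrast checker):

```
a {
  color: #00588a; /* darker than dodgerblue, so it holds up better on a white background */
}
```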
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example, an academic article, you’ll not want to have a single `*.Rmd` file for the whole thing, no more than you want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to have to do a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by just ignoring the problematic part. A more compelling reason regards collaboration. Your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, all will come together. What’s more, if you’re creating an HTML document, you now can put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the color was no longer sufficient contrast at the lowest web standards. Nor was my original color when the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file will come in handy, fixing 100 problems with one line of code. Then you have a template you can use from there on out. As you’ll be using HTML more and more the more you use R Markdown, things like this become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example, an academic article, you’ll not want to have a single `*.Rmd` file for the whole thing, no more than you want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to have to do a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by just ignoring the problematic part. A more compelling reason regards collaboration. Your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, all will come together. What’s more, if you’re creating an HTML document, you now can put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the color was no longer sufficient contrast at the lowest web standards. Nor was my original color when the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file will come in handy, fixing 100 problems with one line of code. Then you have a template you can use from there on out. As you’ll be using HTML more and more the more you use R Markdown, things like this become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/appendix.html |
Appendix
========
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example, an academic article, you’ll not want to have a single `*.Rmd` file for the whole thing, no more than you want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to have to do a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by just ignoring the problematic part. A more compelling reason regards collaboration. Your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, all will come together. What’s more, if you’re creating an HTML document, you now can put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the color was no longer sufficient contrast at the lowest web standards. Nor was my original color when the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file will come in handy, fixing 100 problems with one line of code. Then you have a template you can use from there on out. As you’ll be using HTML more and more the more you use R Markdown, things like this become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example, an academic article, you’ll not want to have a single `*.Rmd` file for the whole thing, no more than you want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to have to do a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by just ignoring the problematic part. A more compelling reason regards collaboration. Your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, all will come together. What’s more, if you’re creating an HTML document, you now can put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the color was no longer sufficient contrast at the lowest web standards. Nor was my original color when the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file will come in handy, fixing 100 problems with one line of code. Then you have a template you can use from there on out. As you’ll be using HTML more and more the more you use R Markdown, things like this become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example, an academic article, you’ll not want to have a single `*.Rmd` file for the whole thing, no more than you want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to have to do a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by just ignoring the problematic part. A more compelling reason regards collaboration. Your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it will also be where the other files come together. For example a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, all will come together. What’s more, if you’re creating an HTML document, you now can put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 10% of people see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but the color was no longer sufficient contrast at the lowest web standards. Nor was my original color when the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file will come in handy, fixing 100 problems with one line of code. Then you have a template you can use from there on out. As you’ll be using HTML more and more the more you use R Markdown, things like this become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/appendix.html |
Appendix
========
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document add the following.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you put them in, they will be displayed/numbered as they appear in the actual text.
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere adding them to the text of the document is very easy. Many formats of the file are acceptable, such as BibTeX, EndNote, and more. For example if using BibTeX and the file is `refs.bib`, then just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The style can be anything you want, but will take some extra steps to use. Consult the R Markdown page for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
#### Citations
Now that your document has some references, you’ll want to cite them! In the refs.bib file for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation. T
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/data-processing-and-visualization/appendix.html |
Appendix
========
R Markdown
----------
### Footnotes
Footnotes are very straightforward. To add one, just put `[^myfootnote]` where needed (the name is arbitrary). Then, somewhere in your document, add the following definition.
```
[^myfootnote]: Yadda blah yadda.
```
The usual practice is to put them at the end of the document. It doesn’t matter what order you define them in; they will be displayed and numbered in the order they appear in the actual text.
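To make the pairing concrete, here is a minimal sketch with two footnotes; the marker names and the surrounding text are just placeholders.
```
Here is a claim that needs an aside[^myfootnote], and another one[^note2].

[^myfootnote]: Yadda blah yadda.
[^note2]: More yadda.
```
When knit, the notes are numbered 1 and 2 in the order the markers appear in the text, regardless of where the definitions sit.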
### Citations and references
#### References \& bibliography
Assuming you have a reference file somewhere, adding references to the text of the document is very easy. Many file formats are acceptable, such as BibTeX, EndNote, and more. For example, if you are using BibTeX and the file is `refs.bib`, just note the file in the YAML section.
```
bibliography: refs.bib
biblio-style: apalike
link-citations: yes
```
The citation style can be anything you want, but custom styles take some extra steps to use. Consult the R Markdown documentation for how to use custom styles.
Now, somewhere in your document, put an empty References header like so:
```
# References
```
The references you cite will magically appear in the references section when you knit the document. Note that bookdown documents, of which this is one, also put them at the bottom of the page where the citation occurs.
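Putting the pieces together, a fuller YAML header might look something like the following; the title and output format are just placeholders, and the commented `csl` line is one optional way to point at a custom citation style file you would supply yourself.
```
---
title: "My Paper"
output: html_document
bibliography: refs.bib
link-citations: yes
# csl: apa.csl   # optional: a CSL file for a custom citation style
---
```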
#### Citations
Now that your document has some references, you’ll want to cite them! In the `refs.bib` file, for example, we have the following entry:
```
@book{clark2018rmd,
title={Introduction to R Markdown},
author={Clark, Michael},
year={2018}
}
```
For example, if we type the following:
`Blah blah [see @clark2018rmd, pp. 33-35; also @barthelme1981balloon].`
It will produce:
Blah blah (see Clark [2018](#ref-clark2018rmd), 33–35; also Barthelme [1981](#ref-barthelme1981balloon)).
Use a minus sign (\-) before the @ to suppress mention of the author in the citation:
`Clark says blah [-@clark2018rmd].`
Clark says blah ([2018](#ref-clark2018rmd)).
You can also write an in\-text citation, as follows:
`@barthelme1981balloon says blah.`
Barthelme ([1981](#ref-barthelme1981balloon)) says blah.
`@barthelme1981balloon [p. 33] says blah.`
Barthelme ([1981](#ref-barthelme1981balloon), 33\) says blah.
### Multiple documents
If you’re writing a lengthy document, for example an academic article, you won’t want a single `*.Rmd` file for the whole thing, any more than you’d want a single R script to do all the data preparation and analysis for it. For one thing, probably only one section is data heavy, and you wouldn’t want to redo a lot of processing every time you make a change to the document (though caching would help there). In addition, if there is a problem with one section, you can still put the document together by simply ignoring the problematic part. A more compelling reason is collaboration: your colleague can write the introduction while you work on the results, and the final paper can then be put together without conflict.
The best way to accomplish this is to think of your document like you would a website. The `index.Rmd` file has the YAML and other settings, and it is also where the other files come together. For example, a paper with an introduction, results, and discussion section might have this in the index file.
```
```{r, child='introduction.Rmd'}
```
```{r, child='results.Rmd'}
```
```{r, child='discussion.Rmd'}
```
```
You work on the individual sections separately, and when you knit the index file, everything comes together. What’s more, if you’re creating an HTML document, you can now put this index file, which is the complete document, on the web for easy access. For example, if the document is placed in a folder like `mydoc`, then one could go to `www.someplace.net/mydoc/` and view the document.
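If you prefer to knit from the console rather than the RStudio Knit button, a call along these lines (assuming the parent file is named `index.Rmd`) will stitch the child documents into the final output:
```
# knit the parent document; the child chunks pull in the other .Rmd files
rmarkdown::render("index.Rmd", output_format = "html_document")
```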
### Web standards
When creating an HTML document or site and customizing things as you like, you should consider accessibility issues at some point. Not everyone interacts with the web the same way. For example, roughly 8% of men and 0\.5% of women see [color differently](https://en.wikipedia.org/wiki/Color_blindness) from ‘normal’. Also, if your font is too light to read, or your visualizations don’t distinguish points of interest for a certain group of people, your document is less effective at communicating your ideas.
As a simple example, when we changed the link color to dodgerblue, it might not have seemed like much, but that color no longer had sufficient contrast to meet even the lowest web standards. Nor did my original color once the gray background was added. The default link color for my documents is just fine though.
You can get a [browser extension](https://www.deque.com/axe/) to see what problems your page has once it’s on the web. This is where your `*.css` file comes in handy: a single line of code can fix a hundred flagged problems at once, and you end up with a template you can reuse from then on. The more you use R Markdown, the more HTML you’ll be producing, so issues like these become more important. You won’t likely be able to deal with every issue that arises, but you’ll want to consider them.
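For instance, a single rule in your custom `*.css` file can change every link in the document at once; the hex value below is just an illustrative darker blue, not a specific recommendation.
```
/* a darker link color generally holds up better against light backgrounds */
a {
  color: #0645ad;
}
```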
| Text Analysis |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/index.html |
Overview
========
In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customisable data visualisation options than are typically available in point\-and\-click software, due to the open\-source nature of R. These visualisation options not only look attractive, but can increase transparency about the distribution of the underlying data rather than relying on commonly used visualisations of aggregations such as bar charts of means.
In this tutorial, we provide a practical introduction to data visualisation using R, specifically aimed at researchers who have little to no prior experience of using R. First we detail the rationale for using R for data visualisation and introduce the "grammar of graphics" that underlies data visualisation using the ggplot2 package. The tutorial then walks the reader through how to replicate plots that are commonly available in point\-and\-click software such as histograms and boxplots, as well as showing how the code for these "basic" plots can be easily extended to less commonly available options such as violin\-boxplots.
The dataset and code used in this tutorial are available at <https://osf.io/bj83f/>, whilst an interactive version of this tutorial is available at <https://psyteachr.github.io/introdataviz/> and includes solutions to the activities and an appendix with additional resources and advanced plotting options.
0\.1 Citing
-----------
Please cite both the preprint and interactive online tutorial as:
Nordmann, E., McAleer, P., Toivo, W., Paterson, H. \& DeBruine, L. (2022\). Data visualisation using R, for researchers who don't use R. Advances in Methods and Practices in Psychological Science. [https://doi.org/10\.1177/25152459221074654](https://doi.org/10.1177/25152459221074654)
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/introduction.html |
1 Introduction
==============
Use of the programming language R ([R Core Team, 2022](references.html#ref-R-base)) for data processing and statistical analysis by researchers is increasingly common, with an average yearly growth of 87% in the number of citations of the R Core Team between 2006\-2018 ([Barrett, 2019](references.html#ref-barrett2019six)). In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customisable data visualisation options than are typically available in point\-and\-click software, due to the open\-source nature of R. These visualisation options not only look attractive, but can increase transparency about the distribution of the underlying data rather than relying on commonly used visualisations of aggregations such as bar charts of means ([Newman \& Scholl, 2012](references.html#ref-newman2012bar)).
Yet, the benefits of using R are obscured for many researchers by the perception that coding skills are difficult to learn ([Robins et al., 2003](references.html#ref-robins2003learning)). Coupled with this, only a minority of psychology programmes currently teach coding skills ([Wills, n.d.](references.html#ref-rminr)) with the majority of both undergraduate and postgraduate courses using proprietary point\-and\-click software such as SAS, SPSS or Microsoft Excel. While the sophisticated use of proprietary software often necessitates the use of computational thinking skills akin to coding (for instance SPSS scripts or formulas in Excel), we have found that many researchers do not perceive that they already have introductory coding skills. In the following tutorial we intend to change that perception by showing how experienced researchers can redevelop their existing computational skills to utilise the powerful data visualisation tools offered by R.
In this tutorial we provide a practical introduction to data visualisation using R, specifically aimed at researchers who have little to no prior experience of using R. First we detail the rationale for using R for data visualisation and introduce the "grammar of graphics" that underlies data visualisation using the `ggplot2` package. The tutorial then walks the reader through how to replicate plots that are commonly available in point\-and\-click software such as histograms and boxplots, as well as showing how the code for these "basic" plots can be easily extended to less commonly available options such as violin\-boxplots.
1\.1 Why R for data visualisation?
----------------------------------
Data visualisation benefits from the same advantages as statistical analysis when writing code rather than using point\-and\-click software \-\- reproducibility and transparency. The need for psychological researchers to work in reproducible ways has been well\-documented and discussed in response to the replication crisis (e.g. [Munafò et al., 2017](references.html#ref-munafo2017manifesto)) and we will not repeat those arguments here. However, there is an additional benefit to reproducibility that is less frequently acknowledged compared to the loftier goals of improving psychological science: if you write code to produce your plots, you can reuse and adapt that code in the future rather than starting from scratch each time.
In addition to the benefits of reproducibility, using R for data visualisation gives the researcher almost total control over each element of the plot. Whilst this flexibility can seem daunting at first, the ability to write reusable code recipes (and use recipes created by others) is highly advantageous. The level of customisation and the professional outputs available using R have, for instance, led news outlets such as the BBC ([Visual \& Journalism, 2019](references.html#ref-BBC-R)) and the New York Times ([Bertini \& Stefaner, 2015](references.html#ref-NYT-R)) to adopt R as their preferred data visualisation tool.
1\.2 A layered grammar of graphics
----------------------------------
There are multiple approaches to data visualisation in R; in this paper we use the popular `ggplot2` package ([Wickham, 2016](references.html#ref-ggplot2)), which is part of the larger `tidyverse` ([Wickham, 2017](references.html#ref-tidyverse)) collection of packages that provide functions for data wrangling, descriptives, and visualisation. A grammar of graphics ([Wilkinson et al., 2005](references.html#ref-wilkinson2005graph)) is a standardised way to describe the components of a graphic. `ggplot2` uses a layered grammar of graphics ([Wickham, 2010](references.html#ref-wickham2010layered)), in which plots are built up in a series of layers. It may be helpful to think about any picture as having multiple elements that sit semi\-transparently over each other. A good analogy is old Disney movies where artists would create a background and then add moveable elements on top of the background via transparencies.
Figure [1\.1](introduction.html#fig:layers) displays the evolution of a simple scatterplot using this layered approach. First, the plot space is built (layer 1\); the variables are specified (layer 2\); the type of visualisation (known as a `geom`) that is desired for these variables is specified (layer 3\) \- in this case `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` is called to visualise individual data points; a second geom is added to include a line of best fit (layer 4\), the axis labels are edited for readability (layer 5\), and finally, a theme is applied to change the overall appearance of the plot (layer 6\).
Figure 1\.1: Evolution of a layered plot
Importantly, each layer is independent and independently customisable. For example, the size, colour and position of each component can be adjusted, or one could, for example, remove the first geom (the data points) to only visualise the line of best fit, simply by removing the layer that draws the data points (Figure [1\.2](introduction.html#fig:remove-layer)). The use of layers makes it easy to build up complex plots step\-by\-step, and to adapt or extend plots from existing code.
Figure 1\.2: Plot with scatterplot layer removed.
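As a concrete sketch of that layered build (the data frame `dat_long` and the variables `age` and `rt` are hypothetical stand\-ins here, since the plotting data haven’t been introduced yet):
```
library(ggplot2)

# layer 1: the plot space; layer 2: map variables to aesthetics
ggplot(dat_long, aes(x = age, y = rt)) +
  geom_point() +                # layer 3: individual data points
  geom_smooth(method = "lm") +  # layer 4: line of best fit
  labs(x = "Age (years)",       # layer 5: readable axis labels
       y = "Reaction time (ms)") +
  theme_minimal()               # layer 6: overall theme
```
Deleting the `geom_point()` line (and its trailing `+`) gives the idea behind Figure 1\.2: the remaining layers still render, just without the raw data points.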
1\.3 Tutorial components
------------------------
This tutorial contains three components.
1. A traditional PDF manuscript that can easily be saved, printed, and cited.
2. An online version of the tutorial published at <https://psyteachr.github.io/introdataviz/> that may be easier to copy and paste code from and that also provides the optional activity solutions as well as additional appendices, including code tutorials for advanced plots beyond the scope of this paper and links to additional resources.
3. An Open Science Framework repository published at <https://osf.io/bj83f/> that contains the simulated dataset (see below), preprint, and R Markdown workbook.
1\.4 Simulated dataset
----------------------
For the purpose of this tutorial, we will use simulated data for a 2 x 2 mixed\-design lexical decision task in which 100 participants must decide whether a presented word is a real word or a non\-word. There are 100 rows (1 for each participant) and 7 variables:
* Participant information:
+ `id`: Participant ID
+ `age`: Age
* 1 between\-subject independent variable (IV):
+ `language`: Language group (1 \= monolingual, 2 \= bilingual)
* 4 columns for the 2 dependent variables (DVs) of RT and accuracy, crossed by the within\-subject IV of condition:
+ `rt_word`: Reaction time (ms) for word trials
+ `rt_nonword`: Reaction time (ms) for non\-word trials
+ `acc_word`: Accuracy for word trials
+ `acc_nonword`: Accuracy for non\-word trials
For newcomers to R, we would suggest working through this tutorial with the simulated dataset, then extending the code to your own datasets with a similar structure, and finally generalising the code to new structures and problems.
1\.5 Setting up R and RStudio
-----------------------------
We strongly encourage the use of RStudio ([RStudio Team, 2021](references.html#ref-RStudio)) to write code in R. R is the programming language whilst RStudio is an *integrated development environment* that makes working with R easier. More information on installing both R and RStudio can be found in the additional resources.
Projects are a useful way of keeping all your code, data, and output in one place. To create a new project, open RStudio and click `File - New Project - New Directory - New Project`. You will be prompted to give the project a name, and select a location for where to store the project on your computer. Once you have done this, click `Create Project`. Download the simulated dataset and code tutorial Rmd file from [the online materials](https://osf.io/bj83f/files/) (`ldt_data.csv`, `workbook.Rmd`) and then move them to this folder. The files pane on the bottom right of RStudio should now display this folder and the files it contains \- this is known as your *working directory* and it is where R will look for any data you wish to import and where it will save any output you create.
This tutorial will require you to use the packages in the `tidyverse` collection, as well as the `patchwork` package. To install these packages, copy and paste the code below into the console (the left\-hand pane) and press enter to execute it.
```
# only run in the console, never put this in a script
package_list <- c("tidyverse", "patchwork")
install.packages(package_list)
```
R Markdown is a dynamic format that allows you to combine text and code into one reproducible document. The R Markdown workbook available in the [online materials](https://osf.io/bj83f/files/) contains all the code in this tutorial and there is more information and links to additional resources for how to use R Markdown for reproducible reports in the additional resources.
The reason the above code is not included in the workbook is that every time you run the install code, it will install the latest version of the package. Leaving this code in your script can lead you to unintentionally install a package update you didn’t want. For this reason, avoid including install code in any script or Markdown document.
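What does belong in the script or workbook is the code that loads the installed packages in each new session, which is safe to re\-run:
```
# load the packages at the top of your script or in the first code chunk
library(tidyverse)
library(patchwork)
```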
For more information on how to use R with RStudio, please see the additional resources in the online appendices.
1\.6 Preparing your data
------------------------
Before you start visualising your data, it must be in an appropriate format. These preparatory steps can all be dealt with reproducibly using R and the additional resources section points to extra tutorials for doing so. However, performing these types of tasks in R can require more sophisticated coding skills and the solutions and tools are dependent on the idiosyncrasies of each dataset. For this reason, in this tutorial we encourage the reader to complete data preparation steps using the method they are most comfortable with and to focus on the aim of data visualisation.
### 1\.6\.1 Data format
The simulated lexical decision data is provided in a `csv` (comma\-separated values) file. Functions exist in R to read many other types of data files; the `rio` package’s `import()` function can read most types of files. However, `csv` files avoid problems like Excel’s insistence on mangling anything that even vaguely resembles a date. You may wish to export your data as a `csv` file that contains only the data you want to visualise, rather than a full, larger workbook. It is possible to clean almost any file reproducibly in R; however, as noted above, this can require higher\-level coding skills. For getting started with visualisation, we suggest removing summary rows or additional notes from any files you import so the file only contains the rows and columns of data you want to plot.
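For example, either of the following lines would read the tutorial file into R, assuming `ldt_data.csv` sits in your working directory (you only need one of them):
```
library(tidyverse)

dat <- read_csv("ldt_data.csv")       # readr, loaded with the tidyverse
# dat <- rio::import("ldt_data.csv")  # alternative: rio guesses the format from the extension
```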
### 1\.6\.2 Variable names
Ensuring that your variable names are consistent can make it much easier to work in R. We recommend using short but informative variable names, for example `rt_word` is preferred over `dv1_iv1` or `reaction_time_word_condition` because these are either hard to read or hard to type.
It is also helpful to have a consistent naming scheme, particularly for variable names that require more than one word. Two popular options are `CamelCase` where each new word begins with a capital letter, or `snake_case` where all letters are lower case and words are separated by an underscore. For the purposes of naming variables, avoid using any spaces in variable names (e.g., `rt word`) and consider the additional meaning of a separator beyond making the variable names easier to read. For example, `rt_word`, `rt_nonword`, `acc_word`, and `acc_nonword` all have the DV to the left of the separator and the level of the IV to the right. `rt_word_condition` on the other hand has two separators but only one of them is meaningful, making it more difficult to split variable names consistently. In this paper, we will use `snake_case` and lower case letters for all variable names so that we don't have to remember where to put the capital letters.
When working with your own data, you can rename columns in Excel, but the resources listed in the online appendices point to how to rename columns reproducibly with code.
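As a sketch of the reproducible route, `dplyr::rename()` takes pairs of the form `new_name = old_name`; the messy column name below is invented purely for illustration:
```
library(dplyr)

# rename an awkward imported column to a consistent snake_case name
dat <- dat %>%
  rename(rt_word = `Reaction Time (word)`)
```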
### 1\.6\.3 Data values
A benefit of R is that categorical data can be entered as text. In the tutorial dataset, language group is entered as 1 or 2, so that we can show you how to recode numeric values into factors with labels. However, we recommend recording meaningful labels rather than numbers from the beginning of data collection to avoid misinterpreting data due to coding errors. Note that values must match *exactly* in order to be considered in the same category and R is case sensitive, so "mono", "Mono", and "monolingual" would be classified as members of three separate categories.
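A minimal sketch of that recoding step, assuming the data have been read into `dat` with a numeric `language` column:
```
library(dplyr)

dat <- dat %>%
  mutate(language = factor(language,
                           levels = c(1, 2),
                           labels = c("monolingual", "bilingual")))
```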
Finally, importing data is more straightforward if cells that represent missing data are left empty rather than containing values like `NA`, `missing` or `999`. A complementary rule of thumb is that each column should only contain one type of data, such as words or numbers, not both.
1\.1 Why R for data visualisation?
----------------------------------
Data visualisation benefits from the same advantages as statistical analysis when writing code rather than using point\-and\-click software \-\- reproducibility and transparency. The need for psychological researchers to work in reproducible ways has been well\-documented and discussed in response to the replication crisis (e.g. [Munafò et al., 2017](references.html#ref-munafo2017manifesto)) and we will not repeat those arguments here. However, there is an additional benefit to reproducibility that is less frequently acknowledged compared to the loftier goals of improving psychological science: if you write code to produce your plots, you can reuse and adapt that code in the future rather than starting from scratch each time.
In addition to the benefits of reproducibility, using R for data visualisation gives the researcher almost total control over each element of the plot. Whilst this flexibility can seem daunting at first, the ability to write reusable code recipes (and use recipes created by others) is highly advantageous. The level of customisation and the professional outputs available using R has, for instance, lead news outlets such as the BBC ([Visual \& Journalism, 2019](references.html#ref-BBC-R)) and the New York Times ([Bertini \& Stefaner, 2015](references.html#ref-NYT-R)) to adopt R as their preferred data visualisation tool.
1\.2 A layered grammar of graphics
----------------------------------
There are multiple approaches to data visualisation in R; in this paper we use the popular package1 `ggplot2` ([Wickham, 2016](references.html#ref-ggplot2)) which is part of the larger `tidyverse`2 ([Wickham, 2017](references.html#ref-tidyverse)) collection of packages that provide functions for data wrangling, descriptives, and visualisation. A grammar of graphics ([Wilkinson et al., 2005](references.html#ref-wilkinson2005graph)) is a standardised way to describe the components of a graphic. `ggplot2` uses a layered grammar of graphics ([Wickham, 2010](references.html#ref-wickham2010layered)), in which plots are built up in a series of layers. It may be helpful to think about any picture as having multiple elements that sit semi\-transparently over each other. A good analogy is old Disney movies where artists would create a background and then add moveable elements on top of the background via transparencies.
Figure [1\.1](introduction.html#fig:layers) displays the evolution of a simple scatterplot using this layered approach. First, the plot space is built (layer 1\); the variables are specified (layer 2\); the type of visualisation (known as a `geom`) that is desired for these variables is specified (layer 3\) \- in this case `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` is called to visualise individual data points; a second geom is added to include a line of best fit (layer 4\), the axis labels are edited for readability (layer 5\), and finally, a theme is applied to change the overall appearance of the plot (layer 6\).
Figure 1\.1: Evolution of a layered plot
Importantly, each layer is independent and independently customisable. For example, the size, colour and position of each component can be adjusted, or one could, for example, remove the first geom (the data points) to only visualise the line of best fit, simply by removing the layer that draws the data points (Figure [1\.2](introduction.html#fig:remove-layer)). The use of layers makes it easy to build up complex plots step\-by\-step, and to adapt or extend plots from existing code.
Figure 1\.2: Plot with scatterplot layer removed.
1\.3 Tutorial components
------------------------
This tutorial contains three components.
1. A traditional PDF manuscript that can easily be saved, printed, and cited.
2. An online version of the tutorial published at <https://psyteachr.github.io/introdataviz/> that may be easier to copy and paste code from and that also provides the optional activity solutions as well as additional appendices, including code tutorials for advanced plots beyond the scope of this paper and links to additional resources.
3. An Open Science Framework repository published at <https://osf.io/bj83f/> that contains the simulated dataset (see below), preprint, and R Markdown workbook.
1\.4 Simulated dataset
----------------------
For the purpose of this tutorial, we will use simulated data for a 2 x 2 mixed\-design lexical decision task in which 100 participants must decide whether a presented word is a real word or a non\-word. There are 100 rows (1 for each participant) and 7 variables:
* Participant information:
+ `id`: Participant ID
+ `age`: Age
* 1 between\-subject independent variable (IV):
+ `language`: Language group (1 \= monolingual, 2 \= bilingual)
* 4 columns for the 2 dependent variables (DVs) of RT and accuracy, crossed by the within\-subject IV of condition:
+ `rt_word`: Reaction time (ms) for word trials
+ `rt_nonword`: Reaction time (ms) for non\-word trials
+ `acc_word`: Accuracy for word trials
+ `acc_nonword`: Accuracy for non\-word trials
For newcomers to R, we would suggest working through this tutorial with the simulated dataset, then extending the code to your own datasets with a similar structure, and finally generalising the code to new structures and problems.
1\.5 Setting up R and RStudio
-----------------------------
We strongly encourage the use of RStudio ([RStudio Team, 2021](references.html#ref-RStudio)) to write code in R. R is the programming language whilst RStudio is an *integrated development environment* that makes working with R easier. More information on installing both R and RStudio can be found in the additional resources.
Projects are a useful way of keeping all your code, data, and output in one place. To create a new project, open RStudio and click `File - New Project - New Directory - New Project`. You will be prompted to give the project a name, and select a location for where to store the project on your computer. Once you have done this, click `Create Project`. Download the simulated dataset and code tutorial Rmd file from [the online materials](https://osf.io/bj83f/files/) (`ldt_data.csv`, `workbook.Rmd`) and then move them to this folder. The files pane on the bottom right of RStudio should now display this folder and the files it contains \- this is known as your *working directory* and it is where R will look for any data you wish to import and where it will save any output you create.
This tutorial will require you to use the packages in the `tidyverse` collection. Additionally, we will also require use of `patchwork`. To install these packages, copy and paste the below code into the console (the left hand pane) and press enter to execute the code.
```
# only run in the console, never put this in a script
package_list <- c("tidyverse", "patchwork")
install.packages(package_list)
```
R Markdown is a dynamic format that allows you to combine text and code into one reproducible document. The R Markdown workbook available in the [online materials](https://osf.io/bj83f/files/) contains all the code in this tutorial; the additional resources section provides more information and links on using R Markdown for reproducible reports.
The reason the above code is not included in the workbook is that every time you run the install command it installs the latest version of each package. Leaving installation code in a script can therefore lead you to unintentionally install a package update you didn't want, so avoid including install code in any script or R Markdown document.
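What does belong in a script or workbook is the code that loads the packages each session; a typical pattern (the workbook may organise this slightly differently) is:

```
# Load (rather than install) the packages at the top of a script or Rmd
library(tidyverse)
library(patchwork)
```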
For more information on how to use R with RStudio, please see the additional resources in the online appendices.
1\.6 Preparing your data
------------------------
Before you start visualising your data, it must be in an appropriate format. These preparatory steps can all be carried out reproducibly in R, and the additional resources section points to extra tutorials for doing so. However, performing these tasks in R can require more sophisticated coding skills, and the appropriate solutions and tools depend on the idiosyncrasies of each dataset. For this reason, in this tutorial we encourage the reader to complete the data preparation steps using whatever method they are most comfortable with, and to focus on the aim of data visualisation.
### 1\.6\.1 Data format
The simulated lexical decision data is provided in a `csv` (comma\-separated values) file. Functions exist in R to read many other types of data files; the `rio` package's `import()` function can read most of them. However, `csv` files avoid problems like Excel's insistence on mangling anything that even vaguely resembles a date. You may wish to export your data as a `csv` file that contains only the data you want to visualise, rather than a full, larger workbook. It is possible to clean almost any file reproducibly in R; however, as noted above, this can require higher\-level coding skills. For getting started with visualisation, we suggest removing summary rows or additional notes from any files you import so that the file only contains the rows and columns of data you want to plot.
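As a sketch of what importing looks like in practice, the following should work once `ldt_data.csv` is in your working directory. The Excel file name is hypothetical and is only included to show that `import()` infers the format from the file extension:

```
library(tidyverse)

dat <- read_csv("ldt_data.csv")  # read the csv supplied with the tutorial
glimpse(dat)                     # should show 100 rows and the 7 variables listed above

# Other formats could be read with rio, e.g.:
# library(rio)
# dat <- import("ldt_data.xlsx")  # hypothetical Excel copy of the same data
```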
### 1\.6\.2 Variable names
Ensuring that your variable names are consistent can make it much easier to work in R. We recommend using short but informative variable names; for example, `rt_word` is preferred over `dv1_iv1` or `reaction_time_word_condition` because those alternatives are either hard to read or hard to type.
It is also helpful to have a consistent naming scheme, particularly for variable names that require more than one word. Two popular options are `CamelCase`, where each new word begins with a capital letter, and `snake_case`, where all letters are lower case and words are separated by an underscore. Whichever scheme you choose, avoid using any spaces in variable names (e.g., `rt word`) and consider the additional meaning a separator can carry beyond making the names easier to read. For example, `rt_word`, `rt_nonword`, `acc_word`, and `acc_nonword` all have the DV to the left of the separator and the level of the IV to the right. `rt_word_condition`, on the other hand, has two separators but only one of them is meaningful, making it more difficult to split variable names consistently. In this paper, we will use `snake_case` and lower case letters for all variable names so that we don't have to remember where to put the capital letters.
When working with your own data, you can rename columns in Excel, but the resources listed in the online appendices point to how to rename columns reproducibly with code.
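For completeness, a minimal sketch of renaming columns reproducibly with `dplyr` (the untidy starting names here are hypothetical):

```
library(dplyr)

# rename() takes new_name = old_name pairs; backticks are needed for
# names that contain spaces
dat <- dat %>%
  rename(rt_word    = ReactionTimeWord,
         rt_nonword = `reaction time nonword`)
```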
### 1\.6\.3 Data values
A benefit of R is that categorical data can be entered as text. In the tutorial dataset, language group is entered as 1 or 2, so that we can show you how to recode numeric values into factors with labels. However, we recommend recording meaningful labels rather than numbers from the beginning of data collection to avoid misinterpreting data due to coding errors. Note that values must match *exactly* in order to be considered in the same category and R is case sensitive, so "mono", "Mono", and "monolingual" would be classified as members of three separate categories.
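A minimal sketch of this kind of recoding, assuming the data have been read in as `dat` (the tutorial's own code later on may differ):

```
library(dplyr)

# Turn the numeric language codes into a labelled factor
dat <- dat %>%
  mutate(language = factor(language,
                           levels = c(1, 2),
                           labels = c("monolingual", "bilingual")))
```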
Finally, importing data is more straightforward if cells that represent missing data are left empty rather than containing values like `NA`, `missing` or `999`. A complementary rule of thumb is that each column should only contain one type of data, such as words or numbers, not both.
1 Introduction
==============
Use of the programming language R ([R Core Team, 2022](references.html#ref-R-base)) for data processing and statistical analysis by researchers is increasingly common, with an average yearly growth of 87% in the number of citations of the R Core Team between 2006\-2018 ([Barrett, 2019](references.html#ref-barrett2019six)). In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customisable data visualisation options than are typically available in point\-and\-click software, due to the open\-source nature of R. These visualisation options not only look attractive, but can increase transparency about the distribution of the underlying data rather than relying on commonly used visualisations of aggregations such as bar charts of means ([Newman \& Scholl, 2012](references.html#ref-newman2012bar)).
Yet, the benefits of using R are obscured for many researchers by the perception that coding skills are difficult to learn ([Robins et al., 2003](references.html#ref-robins2003learning)). Coupled with this, only a minority of psychology programmes currently teach coding skills ([Wills, n.d.](references.html#ref-rminr)) with the majority of both undergraduate and postgraduate courses using proprietary point\-and\-click software such as SAS, SPSS or Microsoft Excel. While the sophisticated use of proprietary software often necessitates the use of computational thinking skills akin to coding (for instance SPSS scripts or formulas in Excel), we have found that many researchers do not perceive that they already have introductory coding skills. In the following tutorial we intend to change that perception by showing how experienced researchers can redevelop their existing computational skills to utilise the powerful data visualisation tools offered by R.
In this tutorial we provide a practical introduction to data visualisation using R, specifically aimed at researchers who have little to no prior experience of using R. First we detail the rationale for using R for data visualisation and introduce the "grammar of graphics" that underlies data visualisation using the `ggplot2` package. The tutorial then walks the reader through how to replicate plots that are commonly available in point\-and\-click software such as histograms and boxplots, as well as showing how the code for these "basic" plots can be easily extended to less commonly available options such as violin\-boxplots.
1\.1 Why R for data visualisation?
----------------------------------
Data visualisation benefits from the same advantages as statistical analysis when writing code rather than using point\-and\-click software \-\- reproducibility and transparency. The need for psychological researchers to work in reproducible ways has been well\-documented and discussed in response to the replication crisis (e.g. [Munafò et al., 2017](references.html#ref-munafo2017manifesto)) and we will not repeat those arguments here. However, there is an additional benefit to reproducibility that is less frequently acknowledged compared to the loftier goals of improving psychological science: if you write code to produce your plots, you can reuse and adapt that code in the future rather than starting from scratch each time.
In addition to the benefits of reproducibility, using R for data visualisation gives the researcher almost total control over each element of the plot. Whilst this flexibility can seem daunting at first, the ability to write reusable code recipes (and use recipes created by others) is highly advantageous. The level of customisation and the professional outputs available using R has, for instance, lead news outlets such as the BBC ([Visual \& Journalism, 2019](references.html#ref-BBC-R)) and the New York Times ([Bertini \& Stefaner, 2015](references.html#ref-NYT-R)) to adopt R as their preferred data visualisation tool.
1\.2 A layered grammar of graphics
----------------------------------
There are multiple approaches to data visualisation in R; in this paper we use the popular package1 `ggplot2` ([Wickham, 2016](references.html#ref-ggplot2)) which is part of the larger `tidyverse`2 ([Wickham, 2017](references.html#ref-tidyverse)) collection of packages that provide functions for data wrangling, descriptives, and visualisation. A grammar of graphics ([Wilkinson et al., 2005](references.html#ref-wilkinson2005graph)) is a standardised way to describe the components of a graphic. `ggplot2` uses a layered grammar of graphics ([Wickham, 2010](references.html#ref-wickham2010layered)), in which plots are built up in a series of layers. It may be helpful to think about any picture as having multiple elements that sit semi\-transparently over each other. A good analogy is old Disney movies where artists would create a background and then add moveable elements on top of the background via transparencies.
Figure [1\.1](introduction.html#fig:layers) displays the evolution of a simple scatterplot using this layered approach. First, the plot space is built (layer 1\); the variables are specified (layer 2\); the type of visualisation (known as a `geom`) that is desired for these variables is specified (layer 3\) \- in this case `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` is called to visualise individual data points; a second geom is added to include a line of best fit (layer 4\), the axis labels are edited for readability (layer 5\), and finally, a theme is applied to change the overall appearance of the plot (layer 6\).
Figure 1\.1: Evolution of a layered plot
Importantly, each layer is independent and independently customisable. For example, the size, colour and position of each component can be adjusted, or one could, for example, remove the first geom (the data points) to only visualise the line of best fit, simply by removing the layer that draws the data points (Figure [1\.2](introduction.html#fig:remove-layer)). The use of layers makes it easy to build up complex plots step\-by\-step, and to adapt or extend plots from existing code.
Figure 1\.2: Plot with scatterplot layer removed.
1\.3 Tutorial components
------------------------
This tutorial contains three components.
1. A traditional PDF manuscript that can easily be saved, printed, and cited.
2. An online version of the tutorial published at <https://psyteachr.github.io/introdataviz/> that may be easier to copy and paste code from and that also provides the optional activity solutions as well as additional appendices, including code tutorials for advanced plots beyond the scope of this paper and links to additional resources.
3. An Open Science Framework repository published at <https://osf.io/bj83f/> that contains the simulated dataset (see below), preprint, and R Markdown workbook.
1\.4 Simulated dataset
----------------------
For the purpose of this tutorial, we will use simulated data for a 2 x 2 mixed\-design lexical decision task in which 100 participants must decide whether a presented word is a real word or a non\-word. There are 100 rows (1 for each participant) and 7 variables:
* Participant information:
+ `id`: Participant ID
+ `age`: Age
* 1 between\-subject independent variable (IV):
+ `language`: Language group (1 \= monolingual, 2 \= bilingual)
* 4 columns for the 2 dependent variables (DVs) of RT and accuracy, crossed by the within\-subject IV of condition:
+ `rt_word`: Reaction time (ms) for word trials
+ `rt_nonword`: Reaction time (ms) for non\-word trials
+ `acc_word`: Accuracy for word trials
+ `acc_nonword`: Accuracy for non\-word trials
For newcomers to R, we would suggest working through this tutorial with the simulated dataset, then extending the code to your own datasets with a similar structure, and finally generalising the code to new structures and problems.
1\.5 Setting up R and RStudio
-----------------------------
We strongly encourage the use of RStudio ([RStudio Team, 2021](references.html#ref-RStudio)) to write code in R. R is the programming language whilst RStudio is an *integrated development environment* that makes working with R easier. More information on installing both R and RStudio can be found in the additional resources.
Projects are a useful way of keeping all your code, data, and output in one place. To create a new project, open RStudio and click `File - New Project - New Directory - New Project`. You will be prompted to give the project a name, and select a location for where to store the project on your computer. Once you have done this, click `Create Project`. Download the simulated dataset and code tutorial Rmd file from [the online materials](https://osf.io/bj83f/files/) (`ldt_data.csv`, `workbook.Rmd`) and then move them to this folder. The files pane on the bottom right of RStudio should now display this folder and the files it contains \- this is known as your *working directory* and it is where R will look for any data you wish to import and where it will save any output you create.
This tutorial will require you to use the packages in the `tidyverse` collection. Additionally, we will also require use of `patchwork`. To install these packages, copy and paste the below code into the console (the left hand pane) and press enter to execute the code.
```
# only run in the console, never put this in a script
package_list <- [c](https://rdrr.io/r/base/c.html)("tidyverse", "patchwork")
[install.packages](https://rdrr.io/r/utils/install.packages.html)(package_list)
```
R Markdown is a dynamic format that allows you to combine text and code into one reproducible document. The R Markdown workbook available in the [online materials](https://osf.io/bj83f/files/) contains all the code in this tutorial and there is more information and links to additional resources for how to use R Markdown for reproducible reports in the additional resources.
The reason that the above code is not included in the workbook is that every time you run the install command code it will install the latest version of the package. Leaving this code in your script can lead you to unintentionally install a package update you didn't want. For this reason, avoid including install code in any script or Markdown document.
For more information on how to use R with RStudio, please see the additional resources in the online appendices.
1\.6 Preparing your data
------------------------
Before you start visualising your data, it must be in an appropriate format. These preparatory steps can all be dealt with reproducibly using R and the additional resources section points to extra tutorials for doing so. However, performing these types of tasks in R can require more sophisticated coding skills and the solutions and tools are dependent on the idiosyncrasies of each dataset. For this reason, in this tutorial we encourage the reader to complete data preparation steps using the method they are most comfortable with and to focus on the aim of data visualisation.
### 1\.6\.1 Data format
The simulated lexical decision data is provided in a `csv` (comma\-separated variable) file. Functions exist in R to read many other types of data files; the `rio` package's `import()` function can read most types of files. However, `csv` files avoids problems like Excel's insistence on mangling anything that even vaguely resembles a date. You may wish to export your data as a `csv` file that contains only the data you want to visualise, rather than a full, larger workbook. It is possible to clean almost any file reproducibly in R, however, as noted above, this can require higher level coding skills. For getting started with visualisation, we suggest removing summary rows or additional notes from any files you import so the file only contains the rows and columns of data you want to plot.
### 1\.6\.2 Variable names
Ensuring that your variable names are consistent can make it much easier to work in R. We recommend using short but informative variable names, for example `rt_word` is preferred over `dv1_iv1` or `reaction_time_word_condition` because these are either hard to read or hard to type.
It is also helpful to have a consistent naming scheme, particularly for variable names that require more than one word. Two popular options are `CamelCase` where each new word begins with a capital letter, or `snake_case` where all letters are lower case and words are separated by an underscore. For the purposes of naming variables, avoid using any spaces in variable names (e.g., `rt word`) and consider the additional meaning of a separator beyond making the variable names easier to read. For example, `rt_word`, `rt_nonword`, `acc_word`, and `acc_nonword` all have the DV to the left of the separator and the level of the IV to the right. `rt_word_condition` on the other hand has two separators but only one of them is meaningful, making it more difficult to split variable names consistently. In this paper, we will use `snake_case` and lower case letters for all variable names so that we don't have to remember where to put the capital letters.
When working with your own data, you can rename columns in Excel, but the resources listed in the online appendices point to how to rename columns reproducibly with code.
### 1\.6\.3 Data values
A benefit of R is that categorical data can be entered as text. In the tutorial dataset, language group is entered as 1 or 2, so that we can show you how to recode numeric values into factors with labels. However, we recommend recording meaningful labels rather than numbers from the beginning of data collection to avoid misinterpreting data due to coding errors. Note that values must match *exactly* in order to be considered in the same category and R is case sensitive, so "mono", "Mono", and "monolingual" would be classified as members of three separate categories.
Finally, importing data is more straightforward if cells that represent missing data are left empty rather than containing values like `NA`, `missing` or `999`3. A complementary rule of thumb is that each column should only contain one type of data, such as words or numbers, not both.
1\.1 Why R for data visualisation?
----------------------------------
Data visualisation benefits from the same advantages as statistical analysis when writing code rather than using point\-and\-click software \-\- reproducibility and transparency. The need for psychological researchers to work in reproducible ways has been well\-documented and discussed in response to the replication crisis (e.g. [Munafò et al., 2017](references.html#ref-munafo2017manifesto)) and we will not repeat those arguments here. However, there is an additional benefit to reproducibility that is less frequently acknowledged compared to the loftier goals of improving psychological science: if you write code to produce your plots, you can reuse and adapt that code in the future rather than starting from scratch each time.
In addition to the benefits of reproducibility, using R for data visualisation gives the researcher almost total control over each element of the plot. Whilst this flexibility can seem daunting at first, the ability to write reusable code recipes (and use recipes created by others) is highly advantageous. The level of customisation and the professional outputs available using R has, for instance, lead news outlets such as the BBC ([Visual \& Journalism, 2019](references.html#ref-BBC-R)) and the New York Times ([Bertini \& Stefaner, 2015](references.html#ref-NYT-R)) to adopt R as their preferred data visualisation tool.
1\.2 A layered grammar of graphics
----------------------------------
There are multiple approaches to data visualisation in R; in this paper we use the popular package1 `ggplot2` ([Wickham, 2016](references.html#ref-ggplot2)) which is part of the larger `tidyverse`2 ([Wickham, 2017](references.html#ref-tidyverse)) collection of packages that provide functions for data wrangling, descriptives, and visualisation. A grammar of graphics ([Wilkinson et al., 2005](references.html#ref-wilkinson2005graph)) is a standardised way to describe the components of a graphic. `ggplot2` uses a layered grammar of graphics ([Wickham, 2010](references.html#ref-wickham2010layered)), in which plots are built up in a series of layers. It may be helpful to think about any picture as having multiple elements that sit semi\-transparently over each other. A good analogy is old Disney movies where artists would create a background and then add moveable elements on top of the background via transparencies.
Figure [1\.1](introduction.html#fig:layers) displays the evolution of a simple scatterplot using this layered approach. First, the plot space is built (layer 1\); the variables are specified (layer 2\); the type of visualisation (known as a `geom`) that is desired for these variables is specified (layer 3\) \- in this case `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` is called to visualise individual data points; a second geom is added to include a line of best fit (layer 4\), the axis labels are edited for readability (layer 5\), and finally, a theme is applied to change the overall appearance of the plot (layer 6\).
Figure 1\.1: Evolution of a layered plot
Importantly, each layer is independent and independently customisable. For example, the size, colour and position of each component can be adjusted, or one could, for example, remove the first geom (the data points) to only visualise the line of best fit, simply by removing the layer that draws the data points (Figure [1\.2](introduction.html#fig:remove-layer)). The use of layers makes it easy to build up complex plots step\-by\-step, and to adapt or extend plots from existing code.
Figure 1\.2: Plot with scatterplot layer removed.
1\.3 Tutorial components
------------------------
This tutorial contains three components.
1. A traditional PDF manuscript that can easily be saved, printed, and cited.
2. An online version of the tutorial published at <https://psyteachr.github.io/introdataviz/> that may be easier to copy and paste code from and that also provides the optional activity solutions as well as additional appendices, including code tutorials for advanced plots beyond the scope of this paper and links to additional resources.
3. An Open Science Framework repository published at <https://osf.io/bj83f/> that contains the simulated dataset (see below), preprint, and R Markdown workbook.
1\.4 Simulated dataset
----------------------
For the purpose of this tutorial, we will use simulated data for a 2 x 2 mixed\-design lexical decision task in which 100 participants must decide whether a presented word is a real word or a non\-word. There are 100 rows (1 for each participant) and 7 variables:
* Participant information:
+ `id`: Participant ID
+ `age`: Age
* 1 between\-subject independent variable (IV):
+ `language`: Language group (1 \= monolingual, 2 \= bilingual)
* 4 columns for the 2 dependent variables (DVs) of RT and accuracy, crossed by the within\-subject IV of condition:
+ `rt_word`: Reaction time (ms) for word trials
+ `rt_nonword`: Reaction time (ms) for non\-word trials
+ `acc_word`: Accuracy for word trials
+ `acc_nonword`: Accuracy for non\-word trials
For newcomers to R, we would suggest working through this tutorial with the simulated dataset, then extending the code to your own datasets with a similar structure, and finally generalising the code to new structures and problems.
1\.5 Setting up R and RStudio
-----------------------------
We strongly encourage the use of RStudio ([RStudio Team, 2021](references.html#ref-RStudio)) to write code in R. R is the programming language whilst RStudio is an *integrated development environment* that makes working with R easier. More information on installing both R and RStudio can be found in the additional resources.
Projects are a useful way of keeping all your code, data, and output in one place. To create a new project, open RStudio and click `File - New Project - New Directory - New Project`. You will be prompted to give the project a name, and select a location for where to store the project on your computer. Once you have done this, click `Create Project`. Download the simulated dataset and code tutorial Rmd file from [the online materials](https://osf.io/bj83f/files/) (`ldt_data.csv`, `workbook.Rmd`) and then move them to this folder. The files pane on the bottom right of RStudio should now display this folder and the files it contains \- this is known as your *working directory* and it is where R will look for any data you wish to import and where it will save any output you create.
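As a quick, optional check that R can see these files, you can print the working directory and list its contents from the console; a minimal sketch using base R:

```
getwd()        # prints the path of your current working directory
list.files()   # should include "ldt_data.csv" and "workbook.Rmd"
```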
This tutorial will require you to use the packages in the `tidyverse` collection, as well as the `patchwork` package. To install these packages, copy and paste the code below into the console (the left\-hand pane) and press enter to execute it.
```
# only run in the console, never put this in a script
package_list <- [c](https://rdrr.io/r/base/c.html)("tidyverse", "patchwork")
[install.packages](https://rdrr.io/r/utils/install.packages.html)(package_list)
```
R Markdown is a dynamic format that allows you to combine text and code into one reproducible document. The R Markdown workbook available in the [online materials](https://osf.io/bj83f/files/) contains all the code in this tutorial and there is more information and links to additional resources for how to use R Markdown for reproducible reports in the additional resources.
The reason that the above code is not included in the workbook is that every time you run `install.packages()` it installs the latest version of the package. Leaving this code in your script can lead you to unintentionally install a package update you didn't want. For this reason, avoid including installation code in any script or R Markdown document.
For more information on how to use R with RStudio, please see the additional resources in the online appendices.
1\.6 Preparing your data
------------------------
Before you start visualising your data, it must be in an appropriate format. These preparatory steps can all be dealt with reproducibly using R and the additional resources section points to extra tutorials for doing so. However, performing these types of tasks in R can require more sophisticated coding skills and the solutions and tools are dependent on the idiosyncrasies of each dataset. For this reason, in this tutorial we encourage the reader to complete data preparation steps using the method they are most comfortable with and to focus on the aim of data visualisation.
### 1\.6\.1 Data format
The simulated lexical decision data is provided in a `csv` (comma\-separated values) file. Functions exist in R to read many other types of data files; the `rio` package's `import()` function can read most file types. However, `csv` files avoid problems like Excel's insistence on mangling anything that even vaguely resembles a date. You may wish to export your data as a `csv` file that contains only the data you want to visualise, rather than a full, larger workbook. It is possible to clean almost any file reproducibly in R; however, as noted above, this can require higher\-level coding skills. For getting started with visualisation, we suggest removing summary rows or additional notes from any files you import so the file only contains the rows and columns of data you want to plot.
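For illustration, a minimal sketch of reading the same file with `rio` (assuming the `rio` package is installed and the file is in your working directory) might look like this:

```
# rio picks an appropriate reader based on the file extension
library(rio)
dat <- import("ldt_data.csv")
```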
### 1\.6\.2 Variable names
Ensuring that your variable names are consistent can make it much easier to work in R. We recommend using short but informative variable names, for example `rt_word` is preferred over `dv1_iv1` or `reaction_time_word_condition` because these are either hard to read or hard to type.
It is also helpful to have a consistent naming scheme, particularly for variable names that require more than one word. Two popular options are `CamelCase` where each new word begins with a capital letter, or `snake_case` where all letters are lower case and words are separated by an underscore. For the purposes of naming variables, avoid using any spaces in variable names (e.g., `rt word`) and consider the additional meaning of a separator beyond making the variable names easier to read. For example, `rt_word`, `rt_nonword`, `acc_word`, and `acc_nonword` all have the DV to the left of the separator and the level of the IV to the right. `rt_word_condition` on the other hand has two separators but only one of them is meaningful, making it more difficult to split variable names consistently. In this paper, we will use `snake_case` and lower case letters for all variable names so that we don't have to remember where to put the capital letters.
When working with your own data, you can rename columns in Excel, but the resources listed in the online appendices point to how to rename columns reproducibly with code.
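As a flavour of what that code looks like, columns can be renamed reproducibly with `dplyr`'s `rename()` function; the old column names in this sketch are hypothetical and only illustrate the `new_name = old_name` pattern:

```
library(dplyr)

# rename(data, new_name = old_name); the old names below are made up for illustration
dat <- rename(dat,
              rt_word    = reaction_time_word,
              rt_nonword = reaction_time_nonword)
```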
### 1\.6\.3 Data values
A benefit of R is that categorical data can be entered as text. In the tutorial dataset, language group is entered as 1 or 2, so that we can show you how to recode numeric values into factors with labels. However, we recommend recording meaningful labels rather than numbers from the beginning of data collection to avoid misinterpreting data due to coding errors. Note that values must match *exactly* in order to be considered in the same category and R is case sensitive, so "mono", "Mono", and "monolingual" would be classified as members of three separate categories.
Finally, importing data is more straightforward if cells that represent missing data are left empty rather than containing values like `NA`, `missing` or `999`. A complementary rule of thumb is that each column should only contain one type of data, such as words or numbers, not both.
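If your own file does use placeholder codes for missing data, the `na` argument of `readr`'s `read_csv()` (introduced in the next chapter) can convert them to `NA` on import; a minimal sketch, not needed for the tutorial dataset:

```
library(readr)

# treat empty cells, "missing", and 999 as missing values when importing
dat <- read_csv("ldt_data.csv", na = c("", "missing", "999"))
```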
2 Getting Started
=================
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it loads the data as a tibble, the table format designed to work smoothly with the other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables; this is a useful quick check to ensure you have loaded the right data. Note that the 7 variables are listed with an additional piece of information, `chr` or `num`; this specifies the kind of data in that column. Similar to Excel and SPSS, R uses this information (the variable type) to determine the allowable manipulations of the data. For instance, character data such as `id` cannot be averaged, whereas numeric data such as `age` can.
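With the `tidyverse` loaded, a compact way to check these variable types from the console is `dplyr`'s `glimpse()` function, which prints each column's name, type, and first few values:

```
glimpse(dat)   # one line per column: name, type (e.g. chr, dbl), first values
```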
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
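As a sketch of that approach, the recoding step could keep the original numeric codes and add a new column instead; note this should be run in place of the code above, on the freshly imported data (`language_recoded` is a hypothetical column name):

```
# run on the freshly imported data instead of the overwriting version above
dat <- mutate(dat, language_recoded = factor(
  x = language,
  levels = c(1, 2),
  labels = c("monolingual", "bilingual")
))
```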
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe,* and can be translated as "*and then"*. For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset; rather, it changes the underlying structure so that, if groups are specified, whatever functions are called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created, so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
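Typing the object name then prints the stored table, and individual values can be pulled out with `$`:

```
age_stats            # print the stored summary table
age_stats$mean_age   # extract a single value, e.g. for reporting in text
```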
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of code sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geoms layers work in symbiosis so it is worthwhile checking the mapping rules as these are related to the default statistic for the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
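Equivalently, you could set the number of bins rather than their width; for example:

```
ggplot(dat, aes(x = age)) +
  geom_histogram(bins = 20)
```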
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the names of the conditions with a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each label within its own quotation marks, in the order displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks.
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function despite the fact that the x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The below code updates the x\-axis and y\-axis labels to the histogram, but also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use it. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object; run this code on its own. It may be useful to add it to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
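That is:

```
theme_set(theme_grey())   # restore the ggplot2 default for subsequent plots
```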
2\.10 Activities 1
------------------
Before you move on try the following:
1. Add a layer that edits the **name** of the y\-axis histogram label to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
#multiple options here e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it enters the data into the R environment in such a way that it makes most sense for other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables, and this is a useful quick check to ensure one has loaded the right data. Note that the 7 variables have an additional piece of information `chr` and `num`; this specifies the kind of data in the column. Similar to Excel and SPSS, R uses this information (or variable type) to specify allowable manipulations of data. For instance character data such as the `id` cannot be averaged, while it is possible to do this with numerical data such as the `age`.
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe,* and can be translated as "*and then"*. For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset, rather, it changes the underlying structure so that if groups are specified, whatever functions called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of code sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geoms layers work in symbiosis so it is worthwhile checking the mapping rules as these are related to the default statistic for the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the names of the conditions with a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each word within their own parenthesis, and are in the order displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks.
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function, despite the fact that x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The below code updates the x\-axis and y\-axis labels to the histogram, but also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use a theme. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object, you should run this code on its own. It may be useful to add this code to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
2\.10 Activities 1
------------------
Before you move on try the following:
1. Add a layer that edits the **name** of the histogram's y\-axis to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
#multiple options here e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/getting-started.html |
2 Getting Started
=================
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it enters the data into the R environment in such a way that it makes most sense for other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables, and this is a useful quick check to ensure you have loaded the right data. Note that the 7 variables have an additional piece of information, `chr` or `num`; this specifies the kind of data in the column. Similar to Excel and SPSS, R uses this information (or variable type) to determine the allowable manipulations of the data. For instance, character data such as `id` cannot be averaged, whereas numeric data such as `age` can.
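If you prefer to run these checks as code, here is a minimal sketch using base R functions (`View()` opens the viewer mentioned above, so it is best run interactively):
```
dim(dat)    # number of rows (observations) and columns (variables)
names(dat)  # the column names
View(dat)   # open the data in the spreadsheet viewer
```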
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
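For example, a version of the recoding step that does not overwrite `dat` might look like the sketch below, using the illustrative names suggested above (this assumes `dat` still contains the original numeric codes):
```
dat_recoded <- mutate(dat, language_recoded = factor(
  x = language,                           # column to translate
  levels = c(1, 2),                       # original numeric codes
  labels = c("monolingual", "bilingual")  # labels for display
))
```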
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe*, and can be translated as "*and then*". For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
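For comparison, the same chain written without the pipe has to nest each call inside the next and is read from the inside out; this is exactly what `%>%` lets you avoid. A sketch:
```
# group, then count, then remove the grouping -- read from the innermost call outwards
ungroup(count(group_by(dat, language)))
```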
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset; rather, it changes the underlying structure so that, if groups are specified, whatever functions are called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created, so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
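Typing the object name on its own line then prints the saved summary, for example:
```
age_stats   # printing the object displays the stored summary table
```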
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
As an aside, the same counts can also be rescaled to percentages within the plot itself; percentages are covered in more detail in the next section:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of the code that produced Figure 2\.1 sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geom layers work in symbiosis, so it is worth checking the mapping rules, as these are related to the default statistic of the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
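As a brief aside that is not part of the tutorial text, `ggplot2` also provides `geom_col()`, which is a shorthand for `geom_bar(stat = "identity")`; the following sketch should produce the same plot:
```
ggplot(dat_percent, aes(x = language, y = percent)) +
  geom_col()   # equivalent to geom_bar(stat = "identity")
```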
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
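For comparison, here is a sketch of the same histogram specified by the number of bins rather than the bin width; the exact appearance will depend on your data:
```
ggplot(dat, aes(x = age)) +
  geom_histogram(bins = 20)   # 20 bins instead of a fixed bin width
```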
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
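The same arguments work inside `geom_bar()`; a sketch applying them to the bar chart of counts:
```
ggplot(dat, aes(x = language)) +
  geom_bar(fill = "white", colour = "black")   # all bars share these colours
```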
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the labels shown for the levels of a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each label within its own quotation marks, in the order in which they are displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks. A sketch that generates the same breaks with `seq()` follows this list.
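As a small extension that is not in the original text, the breaks can also be generated with the base R `seq()` function rather than typed out one by one:
```
ggplot(dat, aes(x = language)) +
  geom_bar() +
  scale_x_discrete(name = "Language group",
                   labels = c("Monolingual", "Bilingual")) +
  scale_y_continuous(name = "Number of participants",
                     breaks = seq(0, 50, by = 10))   # same as c(0,10,20,30,40,50)
```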
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function, despite the fact that the x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The code below updates the x\-axis label of the histogram and also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use a theme. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object, so you should run this code on its own. It may be useful to add this code to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
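That is:
```
theme_set(theme_grey())   # restore the default ggplot2 theme
```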
2\.10 Activities 1
------------------
Before you move on try the following:
1. Add a layer that edits the **name** of the histogram's y\-axis to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
#multiple options here e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it enters the data into the R environment in such a way that it makes most sense for other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables, and this is a useful quick check to ensure one has loaded the right data. Note that the 7 variables have an additional piece of information `chr` and `num`; this specifies the kind of data in the column. Similar to Excel and SPSS, R uses this information (or variable type) to specify allowable manipulations of data. For instance character data such as the `id` cannot be averaged, while it is possible to do this with numerical data such as the `age`.
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe,* and can be translated as "*and then"*. For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset, rather, it changes the underlying structure so that if groups are specified, whatever functions called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of code sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geoms layers work in symbiosis so it is worthwhile checking the mapping rules as these are related to the default statistic for the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the names of the conditions with a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each word within their own parenthesis, and are in the order displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks.
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function, despite the fact that x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The below code updates the x\-axis and y\-axis labels to the histogram, but also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use a theme. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object, you should run this code on its own. It may be useful to add this code to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
2\.10 Activities 1
------------------
Before you move on, try the following:
1. Add a layer that edits the **name** of the histogram's y\-axis to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
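As with the histogram earlier, `fill` and `colour` can be set together inside the same geom. A small variation on the solution above (the colour values here are arbitrary):
```
ggplot(data = dat, mapping = aes(x = language)) +
  geom_bar(fill = "red", colour = "black") # red bars with a black outline
```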
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
# multiple options here, e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
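If auto\-complete is not available, one alternative (a sketch that relies only on base R) is to search the loaded packages for functions whose names begin with `theme_`:
```
# list objects from loaded packages whose names start with "theme_"
apropos("^theme_")
```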
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/getting-started.html |
2 Getting Started
=================
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it enters the data into the R environment in such a way that it makes most sense for other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables, and this is a useful quick check to ensure one has loaded the right data. Note that the 7 variables have an additional piece of information `chr` and `num`; this specifies the kind of data in the column. Similar to Excel and SPSS, R uses this information (or variable type) to specify allowable manipulations of data. For instance character data such as the `id` cannot be averaged, while it is possible to do this with numerical data such as the `age`.
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe,* and can be translated as "*and then"*. For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset, rather, it changes the underlying structure so that if groups are specified, whatever functions called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of code sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geoms layers work in symbiosis so it is worthwhile checking the mapping rules as these are related to the default statistic for the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the names of the conditions with a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each word within their own parenthesis, and are in the order displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks.
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function, despite the fact that x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The below code updates the x\-axis and y\-axis labels to the histogram, but also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use a theme. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object, you should run this code on its own. It may be useful to add this code to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
2\.10 Activities 1
------------------
Before you move on try the following:
1. Add a layer that edits the **name** of the y\-axis histogram label to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
#multiple options here e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
2\.1 Loading packages
---------------------
To load the packages that have the functions we need, use the `[library()](https://rdrr.io/r/base/library.html)` function. Whilst you only need to install packages once, you need to load any packages you want to use with `[library()](https://rdrr.io/r/base/library.html)` every time you start R or start a new session. When you load the `tidyverse`, you actually load several separate packages that are all part of the same collection and have been designed to work together. R will produce a message that tells you the names of the packages that have been loaded.
```
[library](https://rdrr.io/r/base/library.html)([tidyverse](https://tidyverse.tidyverse.org))
[library](https://rdrr.io/r/base/library.html)([patchwork](https://patchwork.data-imaginist.com))
```
2\.2 Loading data
-----------------
To load the [simulated data](https://osf.io/bj83f/files/) we use the function `[read_csv()](https://readr.tidyverse.org/reference/read_delim.html)` from the `readr` tidyverse package. Note that there are many other ways of reading data into R, but the benefit of this function is that it enters the data into the R environment in such a way that it makes most sense for other tidyverse packages.
```
dat <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)(file = "ldt_data.csv")
```
This code has created an object `dat` into which you have read the data from the file `ldt_data.csv`. This object will appear in the environment pane in the top right. Note that the name of the data file must be in quotation marks and the file extension (`.csv`) must also be included. If you receive the error `…does not exist in current working directory` it is highly likely that you have made a typo in the file name (remember R is case sensitive), have forgotten to include the file extension `.csv`, or that the data file you want to load is not stored in your project folder. If you get the error `could not find function` it means you have either not loaded the correct package (a common beginner error is to write the code, but not run it), or you have made a typo in the function name.
You should always check after importing data that the resulting table looks like you expect. To view the dataset, click `dat` in the environment pane or run `View(dat)` in the console. The environment pane also tells us that the object `dat` has 100 observations of 7 variables, and this is a useful quick check to ensure one has loaded the right data. Note that the 7 variables have an additional piece of information `chr` and `num`; this specifies the kind of data in the column. Similar to Excel and SPSS, R uses this information (or variable type) to specify allowable manipulations of data. For instance character data such as the `id` cannot be averaged, while it is possible to do this with numerical data such as the `age`.
2\.3 Handling numeric factors
-----------------------------
Another useful check is to use the functions `[summary()](https://rdrr.io/r/base/summary.html)` and `[str()](https://rdrr.io/r/utils/str.html)` (structure) to check what kind of data R thinks is in each column. Run the below code and look at the output of each, comparing it with what you know about the simulated dataset:
```
[summary](https://rdrr.io/r/base/summary.html)(dat)
[str](https://rdrr.io/r/utils/str.html)(dat)
```
Because the factor `language` is coded as 1 and 2, R has categorised this column as containing numeric information and unless we correct it, this will cause problems for visualisation and analysis. The code below shows how to recode numeric codes into labels.
* `[mutate()](https://dplyr.tidyverse.org/reference/mutate.html)` makes new columns in a data table, or overwrites a column;
* `[factor()](https://rdrr.io/r/base/factor.html)` translates the language column into a factor with the labels "monolingual" and "bilingual". You can also use `[factor()](https://rdrr.io/r/base/factor.html)` to set the display order of a column that contains words. Otherwise, they will display in alphabetical order. In this case we are replacing the numeric data (1 and 2\) in the `language` column with the equivalent English labels `monolingual` for 1 and `bilingual` for 2\. At the same time we will change the column type to be a factor, which is how R defines categorical data.
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
x = language, # column to translate
levels = [c](https://rdrr.io/r/base/c.html)(1, 2), # values of the original data in preferred order
labels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual") # labels for display
))
```
Make sure that you always check the output of any code that you run. If after running this code `language` is full of `NA` values, it means that you have run the code twice. The first time would have worked and transformed the values from `1` to `monolingual` and `2` to `bilingual`. If you run the code again on the same dataset, it will look for the values `1` and `2`, and because there are no longer any that match, it will return NA. If this happens, you will need to reload the dataset from the csv file.
A good way to avoid this is never to overwrite data, but to always store the output of code in new objects (e.g., `dat_recoded`) or new variables (`language_recoded`). For the purposes of this tutorial, overwriting provides a useful teachable moment so we'll leave it as it is.
2\.4 Argument names
-------------------
Each function has a list of arguments it can take, and a default order for those arguments. You can get more information on each function by entering `?function_name` into the console, although be aware that learning to read the help documentation in R is a skill in itself. When you are writing R code, as long as you stick to the default order, you do not have to explicitly call the argument names, for example, the above code could also be written as:
```
dat <- [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(dat, language = [factor](https://rdrr.io/r/base/factor.html)(
language,
[c](https://rdrr.io/r/base/c.html)(1, 2),
[c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual")
))
```
One of the challenges in learning R is that many of the "helpful" examples and solutions you will find online do not include argument names and so for novice learners are completely opaque. In this tutorial, we will include the argument names the first time a function is used, however, we will remove some argument names from subsequent examples to facilitate knowledge transfer to the help available online.
2\.5 Summarising data
---------------------
You can calculate and plot some basic descriptive information about the demographics of our sample using the imported dataset without any additional wrangling (i.e., data processing). The code below uses the `%>%` operator, otherwise known as the *pipe,* and can be translated as "*and then"*. For example, the below code can be read as:
* Start with the dataset `dat` *and then;*
* Group it by the variable `language` *and then;*
* Count the number of observations in each group *and then;*
* Remove the grouping
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | n |
| --- | --- |
| monolingual | 55 |
| bilingual | 45 |
`[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` does not result in surface level changes to the dataset, rather, it changes the underlying structure so that if groups are specified, whatever functions called next are performed separately on each level of the grouping variable. This grouping remains in the object that is created so it is important to remove it with `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` to avoid future operations on the object unknowingly being performed by groups.
The above code therefore counts the number of observations in each group of the variable `language`. If you just need the total number of observations, you could remove the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` and `[ungroup()](https://dplyr.tidyverse.org/reference/group_by.html)` lines, which would perform the operation on the whole dataset, rather than by groups:
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)()
```
| n |
| --- |
| 100 |
Similarly, we may wish to calculate the mean age (and SD) of the sample and we can do so using the function `[summarise()](https://dplyr.tidyverse.org/reference/summarise.html)` from the `dplyr` tidyverse package.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
| mean\_age | sd\_age | n\_values |
| --- | --- | --- |
| 29\.75 | 8\.28 | 100 |
This code produces summary data in the form of a column named `mean_age` that contains the result of calculating the mean of the variable `age`. It then creates `sd_age` which does the same but for standard deviation. Finally, it uses the function `[n()](https://dplyr.tidyverse.org/reference/context.html)` to add the number of values used to calculate the statistic in a column named `n_values` \- this is a useful sanity check whenever you make summary statistics.
Note that the above code will not save the result of this operation, it will simply output the result in the console. If you wish to save it for future use, you can store it in an object by using the `<-` notation and print it later by typing the object name.
```
age_stats <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)())
```
Finally, the `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` function will work in the same way when calculating summary statistics \-\- the output of the function that is called after `[group_by()](https://dplyr.tidyverse.org/reference/group_by.html)` will be produced for each level of the grouping variable.
```
dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://dplyr.tidyverse.org/reference/summarise.html)(mean_age = [mean](https://rdrr.io/r/base/mean.html)(age),
sd_age = [sd](https://rdrr.io/r/stats/sd.html)(age),
n_values = [n](https://dplyr.tidyverse.org/reference/context.html)()) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
| language | mean\_age | sd\_age | n\_values |
| --- | --- | --- | --- |
| monolingual | 27\.96 | 6\.78 | 55 |
| bilingual | 31\.93 | 9\.44 | 45 |
2\.6 Bar chart of counts
------------------------
For our first plot, we will make a simple bar chart of counts that shows the number of participants in each `language` group.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)()
```
Figure 2\.1: Bar chart of counts.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = (..count..)/[sum](https://rdrr.io/r/base/sum.html)(..count..))) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Percent", labels=scales::[percent](https://scales.r-lib.org/reference/label_percent.html))
```
The first line of code sets up the base of the plot.
* `data` specifies which data source to use for the plot
* `mapping` specifies which variables to map to which aesthetics (`aes`) of the plot. Mappings describe how variables in the data are mapped to visual properties (aesthetics) of geoms.
* `x` specifies which variable to put on the x\-axis
The second line of code adds a `geom`, and is connected to the base code with `+`. In this case, we ask for `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`. Each `geom` has an associated default statistic. For `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)`, the default statistic is to count the data passed to it. This means that you do not have to specify a `y` variable when making a bar plot of counts; when given an `x` variable `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` will automatically calculate counts of the groups in that variable. In this example, it counts the number of data points that are in each category of the `language` variable.
The base and geoms layers work in symbiosis so it is worthwhile checking the mapping rules as these are related to the default statistic for the plot's geom.
2\.7 Aggregates and percentages
-------------------------------
If your dataset already has the counts that you want to plot, you can set `stat="identity"` inside of `[geom_bar()](https://ggplot2.tidyverse.org/reference/geom_bar.html)` to use that number instead of counting rows. For example, to plot percentages rather than counts within `ggplot2`, you can calculate these and store them in a new object that is then used as the dataset. You can do this in the software you are most comfortable in, save the new data, and import it as a new table, or you can use code to manipulate the data.
```
dat_percent <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # start with the data in dat
[count](https://dplyr.tidyverse.org/reference/count.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) # count rows per language (makes a new column called n)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100)) # make a new column 'percent' equal to
# n divided by the sum of n times 100
```
Notice that we are now omitting the names of the arguments `data` and `mapping` in the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_percent, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language, y = percent)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(stat="identity")
```
Figure 2\.2: Bar chart of pre\-calculated counts.
2\.8 Histogram
--------------
The code to plot a histogram of `age` is very similar to the bar chart code. We start by setting up the plot space, the dataset to use, and mapping the variables to the relevant axis. In this case, we want to plot a histogram with `age` on the x\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)()
```
Figure 2\.3: Histogram of ages.
The base statistic for `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` is also count, and by default `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` divides the x\-axis into 30 "bins" and counts how many observations are in each bin and so the y\-axis does not need to be specified. When you run the code to produce the histogram, you will get the message "stat\_bin() using bins \= 30\. Pick better value with binwidth". You can change this by either setting the number of bins (e.g., `bins = 20`) or the width of each bin (e.g., `binwidth = 5`) as an argument.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 5)
```
Figure 2\.4: Histogram of ages where each bin covers five years.
2\.9 Customisation 1
--------------------
So far we have made basic plots with the default visual appearance. Before we move on to the experimental data, we will introduce some simple visual customisation options. There are many ways in which you can control or customise the visual appearance of figures in R. However, once you understand the logic of one, it becomes easier to understand others that you may see in other examples. The visual appearance of elements can be customised within a geom itself, within the aesthetic mapping, or by connecting additional layers with `+`. In this section we look at the simplest and most commonly\-used customisations: changing colours, adding axis labels, and adding themes.
### 2\.9\.1 Changing colours
For our basic bar chart, you can control colours used to display the bars by setting `fill` (internal colour) and `colour` (outline colour) inside the geom function. This method changes **all** bars; we will show you later how to set fill or colour separately for different groups.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1,
fill = "white",
colour = "black")
```
Figure 2\.5: Histogram with custom colors for bar fill and line colors.
### 2\.9\.2 Editing axis names and labels
To edit axis names and labels you can connect `scale_*` functions to your plot with `+` to add layers. These functions are part of `ggplot2` and the one you use depends on which aesthetic you wish to edit (e.g., x\-axis, y\-axis, fill, colour) as well as the type of data it represents (discrete, continuous).
For the bar chart of counts, the x\-axis is mapped to a discrete (categorical) variable whilst the y\-axis is continuous. For each of these there is a relevant scale function with various elements that can be customised. Each axis then has its own function added as a layer to the basic plot.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants",
breaks = [c](https://rdrr.io/r/base/c.html)(0,10,20,30,40,50))
```
Figure 2\.6: Bar chart with custom axis labels.
* `name` controls the overall name of the axis (note the use of quotation marks)
* `labels` controls the names of the conditions with a discrete variable.
* `[c()](https://rdrr.io/r/base/c.html)` is a function that you will see in many different contexts and is used to combine multiple values. In this case, the labels we want to apply are combined within `[c()](https://rdrr.io/r/base/c.html)` by enclosing each word within their own parenthesis, and are in the order displayed on the plot. A very common error is to forget to enclose multiple values in `[c()](https://rdrr.io/r/base/c.html)`.
* `breaks` controls the tick marks on the axis. Again, because there are multiple values, they are enclosed within `[c()](https://rdrr.io/r/base/c.html)`. Because they are numeric and not text, they do not need quotation marks.
A common error is to map the wrong type of `scale_` function to a variable. Try running the below code:
```
# produces an error
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Language group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
This will produce the error `Discrete value supplied to continuous scale` because we have used a `continuous` scale function, despite the fact that x\-axis variable is discrete. If you get this error (or the reverse), check the type of data on each axis and the function you have used.
### 2\.9\.3 Adding a theme
`ggplot2` has a number of built\-in visual themes that you can apply as an extra layer. The below code updates the x\-axis and y\-axis labels to the histogram, but also applies `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`. Each part of a theme can be independently customised, which may be necessary, for example, if you have journal guidelines on fonts for publication. There are further instructions for how to do this in the online appendices.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 2\.7: Histogram with a custom theme.
You can set the theme globally so that all subsequent plots use a theme. `[theme_set()](https://ggplot2.tidyverse.org/reference/theme_get.html)` is not part of a `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` object, you should run this code on its own. It may be useful to add this code to the top of your script so that all plots produced subsequently use the same theme.
```
[theme_set](https://ggplot2.tidyverse.org/reference/theme_get.html)([theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)())
```
If you wished to return to the default theme, change the above to specify `[theme_grey()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`.
2\.10 Activities 1
------------------
Before you move on try the following:
1. Add a layer that edits the **name** of the histogram's y\-axis to `Number of participants`.
Solution 1
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Number of participants")
```
2. Change the colour of the bars in the bar chart to red.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(data = dat, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(fill = "red")
```
3. Remove `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` from the histogram and instead apply one of the other available themes. To find out about other available themes, start typing `theme_` and the auto\-complete will show you the available options \- this will only work if you have loaded the `tidyverse` library with `[library(tidyverse)](https://tidyverse.tidyverse.org)`.
Solution 3
```
# multiple options here, e.g., theme_classic()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
# theme_bw()
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(age)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "wheat", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Participant age (years)") +
[theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
3 Transforming Data
===================
3\.1 Data formats
-----------------
To visualise the experimental reaction time and accuracy data using `ggplot2`, we first need to reshape the data from wide format to long format. This step can cause friction with novice users of R. Traditionally, psychologists have been taught data skills using wide\-format data. Wide\-format data typically has one row of data for each participant, with separate columns for each score or variable. For repeated\-measures variables, the dependent variable is split across different columns. For between\-groups variables, a separate column is added to encode the group to which a participant or observation belongs.
The simulated lexical decision data is currently in wide format (see Table [3\.1](transforming-data.html#tab:wide-data)), where each participant's aggregated reaction time and accuracy for each level of the within\-subject variable is split across multiple columns for the repeated factor of condition (words versus non\-words).
Table 3\.1: Data in wide format.
| id | age | language | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | 379\.46 | 516\.82 | 99 | 90 |
| S002 | 33 | monolingual | 312\.45 | 435\.04 | 94 | 82 |
| S003 | 23 | monolingual | 404\.94 | 458\.50 | 96 | 87 |
| S004 | 28 | monolingual | 298\.37 | 335\.89 | 92 | 76 |
| S005 | 26 | monolingual | 316\.42 | 401\.32 | 91 | 83 |
| S006 | 29 | monolingual | 357\.17 | 367\.34 | 96 | 78 |
Wide format is popular because it is intuitive to read and easy to enter data into as all the data for one participant is contained within a single row. However, for the purposes of analysis, and particularly for analysis using R, this format is unsuitable. Whilst it is intuitive to read by a human, the same is not true for a computer. Wide\-format data concatenates multiple pieces of information in a single column, for example in Table [3\.1](transforming-data.html#tab:wide-data), `rt_word` contains information related to both a DV and one level of an IV. In comparison, long\-format data separates the DV from the IVs so that each column represents only one variable. The less intuitive part is that long\-format data has multiple rows for each participant (one row for each observation) and a column that encodes the level of the IV (`word` or `nonword`). Wickham ([2014](references.html#ref-wickham2014tidy)) provides a comprehensive overview of the benefits of a similar format known as tidy data, which is a standard way of mapping a dataset to its structure. For the purposes of this tutorial there are two important rules: each column should be a *variable* and each row should be an *observation*.
Moving from using wide\-format to long\-format datasets can require a conceptual shift on the part of the researcher and one that usually only comes with practice and repeated exposure. It may be helpful to make a note that “row \= participant” (wide format) and “row \= observation” (long format) until you get used to moving between the formats. For our example dataset, adhering to these rules for reshaping the data would produce Table [3\.2](transforming-data.html#tab:long). Rather than different observations of the same dependent variable being split across columns, there is now a single column for the DV reaction time, and a single column for the DV accuracy. Each participant now has multiple rows of data, one for each observation (i.e., for each participant there will be as many rows as there are levels of the within\-subject IV). Although there is some repetition of age and language group, each row is unique when looking at the combination of measures.
Table 3\.2: Data in the correct format for visualization.
| id | age | language | condition | rt | acc |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | word | 379\.46 | 99 |
| S001 | 22 | monolingual | nonword | 516\.82 | 90 |
| S002 | 33 | monolingual | word | 312\.45 | 94 |
| S002 | 33 | monolingual | nonword | 435\.04 | 82 |
| S003 | 23 | monolingual | word | 404\.94 | 96 |
| S003 | 23 | monolingual | nonword | 458\.50 | 87 |
The benefits and flexibility of this format will hopefully become apparent as we progress through the tutorial, however, a useful rule of thumb when working with data in R for visualisation is that *anything that shares an axis should probably be in the same column*. For example, a simple boxplot showing reaction time by condition would display the variable `condition` on the x\-axis with bars representing both the `word` and `nonword` data, and `rt` on the y\-axis. Therefore, all the data relating to `condition` should be in one column, and all the data relating to `rt` should be in a separate single column, rather than being split like in wide\-format data.
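As a concrete sketch of that boxplot (shown here only to illustrate the rule of thumb; it assumes the long-format dataset `dat_long` that we create in the next section):
```
# condition and rt each live in a single column of the long-format data
ggplot(dat_long, aes(x = condition, y = rt)) +
  geom_boxplot()
```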
3\.2 Wide to long format
------------------------
We have chosen a 2 x 2 design with two DVs, as we anticipate that this is a design many researchers will be familiar with and may also have existing datasets with a similar structure. However, it is worth normalising that trial\-and\-error is part of the process of learning how to apply these functions to new datasets and structures. Data visualisation can be a useful way to scaffold learning these data transformations because they can provide a concrete visual check as to whether you have done what you intended to do with your data.
### 3\.2\.1 Step 1: `pivot_longer()`
The first step is to use the function `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)` to transform the data to long\-form. We have purposefully used a more complex dataset with two DVs for this tutorial to aid researchers applying our code to their own datasets. Because of this, we will break down the steps involved to help show how the code works.
This first code ignores that the dataset has two DVs, a problem we will fix in step 2\. The pivot functions can be easier to show than tell \- you may find it a useful exercise to run the below code and compare the newly created object `long` (Table [3\.3](transforming-data.html#tab:long1-example)) with the original `dat` Table [3\.1](transforming-data.html#tab:wide-data) before reading on.
```
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_to = "dv_condition",
values_to = "dv")
```
* As with the other tidyverse functions, the first argument specifies the dataset to use as the base, in this case `dat`. This argument name is often dropped in examples.
* `cols` specifies all the columns you want to transform. The easiest way to visualise this is to think about which columns would be the same in the new long\-form dataset and which will change. If you refer back to Table [3\.1](transforming-data.html#tab:wide-data), you can see that `id`, `age`, and `language` all remain, while the columns that contain the measurements of the DVs change. The colon notation `first_column:last_column` is used to select all variables from the first column specified to the last. In our code, `cols` specifies that the columns we want to transform are `rt_word` to `acc_nonword`.
* `names_to` specifies the name of the new column that will be created. This column will contain the names of the selected existing columns.
* Finally, `values_to` names the new column that will contain the values in the selected columns. In this case we'll call it `dv`.
At this point you may find it helpful to go back and compare `dat` and `long` again to see how each argument matches up with the output of the table.
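One quick, rough check when comparing them is the dimensions of each object — a minimal sketch, assuming `dat` and `long` as created above:
```
# dat has one row per participant, with the four DV columns spread across the row
dim(dat)
# long has four rows per participant, one for each of the selected columns
dim(long)
```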
Table 3\.3: Data in long format with mixed DVs.
| id | age | language | dv\_condition | dv |
| --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt\_word | 379\.46 |
| S001 | 22 | monolingual | rt\_nonword | 516\.82 |
| S001 | 22 | monolingual | acc\_word | 99\.00 |
| S001 | 22 | monolingual | acc\_nonword | 90\.00 |
| S002 | 33 | monolingual | rt\_word | 312\.45 |
| S002 | 33 | monolingual | rt\_nonword | 435\.04 |
### 3\.2\.2 Step 2: `pivot_longer()` adjusted
The problem with the above long\-format dataset is that `dv_condition` combines two variables \- it has information about the type of DV and the condition of the IV. To account for this, we include a new argument `names_sep` and adjust `names_to` to specify the creation of two new columns. Note that we are pivoting the same wide\-format dataset `dat` as we did in step 1\.
```
long2 <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv")
```
* `names_sep` specifies how to split up the variable name in cases where it has multiple components. This is when taking care to name your variables consistently and meaningfully pays off. Because the word to the left of the separator (`_`) is always the DV type and the word to the right is always the condition of the within\-subject IV, it is easy to automatically split the columns.
* Note that when specifying more than one column name, they must be combined using `[c()](https://rdrr.io/r/base/c.html)` and be enclosed in their own quotation marks.
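If it is unclear what splitting on the separator does, base R's `strsplit()` shows the same idea applied to a single column name (purely illustrative; it is not part of the pivot itself):
```
# splitting "rt_word" on "_" gives "rt" (the DV type) and "word" (the condition)
strsplit("rt_word", split = "_")
```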
Table 3\.4: Data in long format with dv type and condition in separate columns.
| id | age | language | dv\_type | condition | dv |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt | word | 379\.46 |
| S001 | 22 | monolingual | rt | nonword | 516\.82 |
| S001 | 22 | monolingual | acc | word | 99\.00 |
| S001 | 22 | monolingual | acc | nonword | 90\.00 |
| S002 | 33 | monolingual | rt | word | 312\.45 |
| S002 | 33 | monolingual | rt | nonword | 435\.04 |
### 3\.2\.3 Step 3: `pivot_wider()`
Although we have now split the columns so that there are separate variables for the DV type and level of condition, because the two DVs are different types of data, there is an additional bit of wrangling required to get the data in the right format for plotting.
In the current long\-format dataset, the column `dv` contains both reaction time and accuracy measures. Keeping in mind the rule of thumb that *anything that shares an axis should probably be in the same column,* this creates a problem because we cannot plot two different units of measurement on the same axis. To fix this we need to use the function `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)`. Again, we would encourage you at this point to compare `long2` and `dat_long` with the below code to try and map the connections before reading on.
```
dat_long <- [pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(long2,
names_from = "dv_type",
values_from = "dv")
```
* The first argument is again the dataset you wish to work from, in this case `long2`. We have removed the argument name `data` in this example.
* `names_from` is the reverse of `names_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It will take the values from the variable specified and use these as the new column names. In this case, the values of `rt` and `acc` that are currently in the `dv_type` column will become the new column names.
* `values_from` is the reverse of `values_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It specifies the column that contains the values to fill the new columns with. In this case, the new columns `rt` and `acc` will be filled with the values that were in `dv`.
Again, it can be helpful to compare each dataset with the code to see how it aligns. This final long\-form data should look like Table [3\.2](transforming-data.html#tab:long).
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset: because the process of transformation has introduced some repetition for these variables, the wide\-format dataset, where one row equals one participant, should be used for demographic information. Finally, the three\-step process noted above is broken down for teaching purposes; in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = "dv_type",
values_from = "dv")
```
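Relatedly, a quick way to see why demographic statistics should come from the wide data is to compare row counts — a minimal check, assuming `dat` and `dat_long` as created above:
```
nrow(dat)      # one row per participant
nrow(dat_long) # one row per observation (participant x condition), so ages repeat
```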
3\.3 Histogram 2
----------------
Now that we have the experimental data in the right form, we can begin to create some useful visualizations. First, to demonstrate how code recipes can be reused and adapted, we will create histograms of reaction time and accuracy. The below code uses the same template as before but changes the dataset (`dat_long`), the bin\-widths of the histograms, the `x` variable to display (`rt`/`acc`), and the name of the x\-axis.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
Figure 3\.1: Histograms showing the distribution of reaction time (top) and accuracy (bottom)
3\.4 Density plots
------------------
The layer system makes it easy to create new types of plots by adapting existing recipes. For example, rather than creating a histogram, we can create a smoothed density plot by calling `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` rather than `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. The rest of the code remains identical.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure 3\.2: Density plot of reaction time.
### 3\.4\.1 Grouped density plots
Density plots are most useful for comparing the distributions of different groups of data. Because the dataset is now in long format, with each variable contained within a single column, we can map `condition` to the plot.
* In addition to mapping `rt` to the x\-axis, we specify the `fill` aesthetic to fill the visualisation so that each level of the `condition` variable is represented by a different colour.
* Because the density plots are overlapping, we set `alpha = 0.75` to make the geoms 75% transparent.
* As with the x and y\-axis scale functions, we can edit the names and labels of our fill aesthetic by adding on another `scale_*` layer (`[scale_fill_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)`).
* Note that the `fill` here is set inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function, which tells ggplot to set the fill differently for each value in the `condition` column. You cannot specify which colour here (e.g., `fill="red"`), like you could when you set `fill` inside the `geom_*()` function before.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.3: Density plot of reaction times grouped by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying for us as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.5 Scatterplots
-----------------
Scatterplots are created by calling `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` and require both an `x` and `y` variable to be specified in the mapping.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)()
```
Figure 3\.4: Scatterplot of reaction time versus age.
A line of best fit can be added with an additional layer that calls the function `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. The default is to draw a LOESS or curved regression line. However, a linear line of best fit can be specified using `method = "lm"`. By default, `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` will also draw a confidence envelope around the regression line; this can be removed by adding `se = FALSE` to `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. A common error is to try and use `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` to draw the line of best fit, which, whilst a sensible guess, will not work (try it).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.5: Line of best fit for reaction time versus age.
### 3\.5\.1 Grouped scatterplots
Similar to the density plot, the scatterplot can also be easily adjusted to display grouped data. For `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, the grouping variable is mapped to `colour` rather than `fill` and the relevant `scale_*` function is added.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_colour_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.6: Grouped scatterplot of reaction time versus age by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying for us as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.6 Long to wide format
------------------------
Following the rule that *anything that shares an axis should probably be in the same column* means that we will frequently need our data in long\-form when using `ggplot2`. However, there are some cases when wide format is necessary. For example, we may wish to visualise the relationship between reaction time in the word and non\-word conditions. This requires that the corresponding word and non\-word values for each participant be in the same row. The easiest way to achieve this in our case would simply be to use the original wide\-format data as the input:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_word, y = rt_nonword, colour = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.7: Scatterplot with data grouped by language group
However, there may also be cases when you do not have an original wide\-format version and you can use the `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)` function to transform from long to wide.
```
dat_wide <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(id_cols = "id",
names_from = "condition",
values_from = [c](https://rdrr.io/r/base/c.html)(rt,acc))
```
| id | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- |
| S001 | 379\.4585 | 516\.8176 | 99 | 90 |
| S002 | 312\.4513 | 435\.0404 | 94 | 82 |
| S003 | 404\.9407 | 458\.5022 | 96 | 87 |
| S004 | 298\.3734 | 335\.8933 | 92 | 76 |
| S005 | 316\.4250 | 401\.3214 | 91 | 83 |
| S006 | 357\.1710 | 367\.3355 | 96 | 78 |
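For completeness, a sketch of the same kind of scatterplot drawn from `dat_wide` (note that because `id_cols` kept only `id`, this version has no `language` column to map to colour):
```
ggplot(dat_wide, aes(x = rt_word, y = rt_nonword)) +
  geom_point() +
  geom_smooth(method = "lm")
```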
3\.7 Customisation 2
--------------------
### 3\.7\.1 Accessible colour schemes
One of the drawbacks of using `ggplot2` for visualisation is that the default colour scheme is not accessible (or visually appealing). The red and green default palette is difficult for colour\-blind people to differentiate, and also does not display well in greyscale. You can specify exact custom colours for your plots, but one easy option is to use a custom colour palette. These take the same arguments as their default `scale` sister functions for updating axis names and labels, but display plots in contrasting colours that can be read by colour\-blind people and that also print well in grey scale. For categorical colours, the "Set2", "Dark2" and "Paired" palettes from the `brewer` scale functions are colourblind\-safe (but are hard to distinguish in greyscale). For continuous colours, such as when colour is representing the magnitude of a correlation in a tile plot, the `viridis` scale functions provide a number of different colourblind and greyscale\-safe options.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
Figure 3\.8: Use the Dark2 brewer colour scheme for accessibility.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying for us as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
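For the continuous case mentioned above, the `viridis` scale functions can be swapped in the same way. A minimal sketch (not from the original tutorial) that maps the continuous variable `age` to colour:
```
# continuous colour scale that is colourblind- and greyscale-safe
ggplot(dat_long, aes(x = rt, y = acc, colour = age)) +
  geom_point() +
  scale_colour_viridis_c(name = "Age (years)")
```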
### 3\.7\.2 Specifying axis `breaks` with `seq()`
Previously, when we have edited the `breaks` on the axis labels, we have done so manually, typing out all the values we want to display on the axis. For example, the below code edits the y\-axis so that `age` is displayed in increments of 5\.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [c](https://rdrr.io/r/base/c.html)(20,25,30,35,40,45,50,55,60))
```
However, this is somewhat inefficient. Instead, we can use the function `[seq()](https://rdrr.io/r/base/seq.html)` (short for sequence) to specify the first and last value and the increments `by` which the breaks should display between these two values.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [seq](https://rdrr.io/r/base/seq.html)(20,60, by = 5))
```
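If it helps to see what `seq()` generates on its own, you can run it directly in the console; it returns the values from 20 to 60 in steps of 5:
```
# the breaks used above, generated directly
seq(20, 60, by = 5)
```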
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to create grouped histograms that display the distributions for `rt` for each `language` group separately and also edit the fill legend labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
# add in dodge
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, position = "dodge") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the names of the x\- and y\-axes on the scatterplot.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots
Solution 3
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm", se = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Set2", # or "Dark2" or "Paired"
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
3\.1 Data formats
-----------------
To visualise the experimental reaction time and accuracy data using `ggplot2`, we first need to reshape the data from wide format to long format. This step can cause friction with novice users of R. Traditionally, psychologists have been taught data skills using wide\-format data. Wide\-format data typically has one row of data for each participant, with separate columns for each score or variable. For repeated\-measures variables, the dependent variable is split across different columns. For between\-groups variables, a separate column is added to encode the group to which a participant or observation belongs.
The simulated lexical decision data is currently in wide format (see Table [3\.1](transforming-data.html#tab:wide-data)), where each participant's aggregated 4 reaction time and accuracy for each level of the within\-subject variable is split across multiple columns for the repeated factor of conditon (words versus non\-words).
Table 3\.1: Data in wide format.
| id | age | language | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | 379\.46 | 516\.82 | 99 | 90 |
| S002 | 33 | monolingual | 312\.45 | 435\.04 | 94 | 82 |
| S003 | 23 | monolingual | 404\.94 | 458\.50 | 96 | 87 |
| S004 | 28 | monolingual | 298\.37 | 335\.89 | 92 | 76 |
| S005 | 26 | monolingual | 316\.42 | 401\.32 | 91 | 83 |
| S006 | 29 | monolingual | 357\.17 | 367\.34 | 96 | 78 |
Wide format is popular because it is intuitive to read and easy to enter data into as all the data for one participant is contained within a single row. However, for the purposes of analysis, and particularly for analysis using R, this format is unsuitable. Whilst it is intuitive to read by a human, the same is not true for a computer. Wide\-format data concatenates multiple pieces of information in a single column, for example in Table [3\.1](transforming-data.html#tab:wide-data), `rt_word` contains information related to both a DV and one level of an IV. In comparison, long\-format data separates the DV from the IVs so that each column represents only one variable. The less intuitive part is that long\-format data has multiple rows for each participant (one row for each observation) and a column that encodes the level of the IV (`word` or `nonword`). Wickham ([2014](references.html#ref-wickham2014tidy)) provides a comprehensive overview of the benefits of a similar format known as tidy data, which is a standard way of mapping a dataset to its structure. For the purposes of this tutorial there are two important rules: each column should be a *variable* and each row should be an *observation*.
Moving from using wide\-format to long\-format datasets can require a conceptual shift on the part of the researcher and one that usually only comes with practice and repeated exposure5. It may be helpful to make a note that “row \= participant” (wide format) and “row \= observation” (long format) until you get used to moving between the formats. For our example dataset, adhering to these rules for reshaping the data would produce Table [3\.2](transforming-data.html#tab:long). Rather than different observations of the same dependent variable being split across columns, there is now a single column for the DV reaction time, and a single column for the DV accuracy. Each participant now has multiple rows of data, one for each observation (i.e., for each participant there will be as many rows as there are levels of the within\-subject IV). Although there is some repetition of age and language group, each row is unique when looking at the combination of measures.
Table 3\.2: Data in the correct format for visualization.
| id | age | language | condition | rt | acc |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | word | 379\.46 | 99 |
| S001 | 22 | monolingual | nonword | 516\.82 | 90 |
| S002 | 33 | monolingual | word | 312\.45 | 94 |
| S002 | 33 | monolingual | nonword | 435\.04 | 82 |
| S003 | 23 | monolingual | word | 404\.94 | 96 |
| S003 | 23 | monolingual | nonword | 458\.50 | 87 |
The benefits and flexibility of this format will hopefully become apparent as we progress through the tutorial, however, a useful rule of thumb when working with data in R for visualisation is that *anything that shares an axis should probably be in the same column*. For example, a simple boxplot showing reaction time by condition would display the variable `condition` on the x\-axis with bars representing both the `word` and `nonword` data, and `rt` on the y\-axis. Therefore, all the data relating to `condition` should be in one column, and all the data relating to `rt` should be in a separate single column, rather than being split like in wide\-format data.
3\.2 Wide to long format
------------------------
We have chosen a 2 x 2 design with two DVs, as we anticipate that this is a design many researchers will be familiar with and may also have existing datasets with a similar structure. However, it is worth normalising that trial\-and\-error is part of the process of learning how to apply these functions to new datasets and structures. Data visualisation can be a useful way to scaffold learning these data transformations because they can provide a concrete visual check as to whether you have done what you intended to do with your data.
### 3\.2\.1 Step 1: `pivot_longer()`
The first step is to use the function `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)` to transform the data to long\-form. We have purposefully used a more complex dataset with two DVs for this tutorial to aid researchers applying our code to their own datasets. Because of this, we will break down the steps involved to help show how the code works.
This first code ignores that the dataset has two DVs, a problem we will fix in step 2\. The pivot functions can be easier to show than tell \- you may find it a useful exercise to run the below code and compare the newly created object `long` (Table [3\.3](transforming-data.html#tab:long1-example)) with the original `dat` Table [3\.1](transforming-data.html#tab:wide-data) before reading on.
```
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_to = "dv_condition",
values_to = "dv")
```
* As with the other tidyverse functions, the first argument specifies the dataset to use as the base, in this case `dat`. This argument name is often dropped in examples.
* `cols` specifies all the columns you want to transform. The easiest way to visualise this is to think about which columns would be the same in the new long\-form dataset and which will change. If you refer back to Table [3\.1](transforming-data.html#tab:wide-data), you can see that `id`, `age`, and `language` all remain, while the columns that contain the measurements of the DVs change. The colon notation `first_column:last_column` is used to select all variables from the first column specified to the last In our code, `cols` specifies that the columns we want to transform are `rt_word` to `acc_nonword`.
* `names_to` specifies the name of the new column that will be created. This column will contain the names of the selected existing columns.
* Finally, `values_to` names the new column that will contain the values in the selected columns. In this case we'll call it `dv`.
At this point you may find it helpful to go back and compare `dat` and `long` again to see how each argument matches up with the output of the table.
Table 3\.3: Data in long format with mixed DVs.
| id | age | language | dv\_condition | dv |
| --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt\_word | 379\.46 |
| S001 | 22 | monolingual | rt\_nonword | 516\.82 |
| S001 | 22 | monolingual | acc\_word | 99\.00 |
| S001 | 22 | monolingual | acc\_nonword | 90\.00 |
| S002 | 33 | monolingual | rt\_word | 312\.45 |
| S002 | 33 | monolingual | rt\_nonword | 435\.04 |
### 3\.2\.2 Step 2: `pivot_longer()` adjusted
The problem with the above long\-format data\-set is that `dv_condition` combines two variables \- it has information about the type of DV and the condition of the IV. To account for this, we include a new argument `names_sep` and adjust `name_to` to specify the creation of two new columns. Note that we are pivoting the same wide\-format dataset `dat` as we did in step 1\.
```
long2 <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv")
```
* `names_sep` specifies how to split up the variable name in cases where it has multiple components. This is when taking care to name your variables consistently and meaningfully pays off. Because the word to the left of the separator (`_`) is always the DV type and the word to the right is always the condition of the within\-subject IV, it is easy to automatically split the columns.
* Note that when specifying more than one column name, they must be combined using `[c()](https://rdrr.io/r/base/c.html)` and be enclosed in their own quotation marks.
Table 3\.4: Data in long format with dv type and condition in separate columns.
| id | age | language | dv\_type | condition | dv |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt | word | 379\.46 |
| S001 | 22 | monolingual | rt | nonword | 516\.82 |
| S001 | 22 | monolingual | acc | word | 99\.00 |
| S001 | 22 | monolingual | acc | nonword | 90\.00 |
| S002 | 33 | monolingual | rt | word | 312\.45 |
| S002 | 33 | monolingual | rt | nonword | 435\.04 |
### 3\.2\.3 Step 3: `pivot_wider()`
Although we have now split the columns so that there are separate variables for the DV type and level of condition, because the two DVs are different types of data, there is an additional bit of wrangling required to get the data in the right format for plotting.
In the current long\-format dataset, the column `dv` contains both reaction time and accuracy measures. Keeping in mind the rule of thumb that *anything that shares an axis should probably be in the same column,* this creates a problem because we cannot plot two different units of measurement on the same axis. To fix this we need to use the function `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)`. Again, we would encourage you at this point to compare `long2` and `dat_long` with the below code to try and map the connections before reading on.
```
dat_long <- [pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(long2,
names_from = "dv_type",
values_from = "dv")
```
* The first argument is again the dataset you wish to work from, in this case `long2`. We have removed the argument name `data` in this example.
* `names_from` is the reverse of `names_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It will take the values from the variable specified and use these as the new column names. In this case, the values of `rt` and `acc` that are currently in the `dv_type` column will become the new column names.
* `values_from` is the reverse of `values_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It specifies the column that contains the values to fill the new columns with. In this case, the new columns `rt` and `acc` will be filled with the values that were in `dv`.
Again, it can be helpful to compare each dataset with the code to see how it aligns. This final long\-form data should look like Table [3\.2](transforming-data.html#tab:long).
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset. Because the process of transformation has introduced some repetition for these variables, the wide\-format dataset where one row equals one participant should be used for demographic information. Finally, the three step process noted above is broken down for teaching purposes, in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = "dv_type",
values_from = "dv")
```
### 3\.2\.1 Step 1: `pivot_longer()`
The first step is to use the function `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)` to transform the data to long\-form. We have purposefully used a more complex dataset with two DVs for this tutorial to aid researchers applying our code to their own datasets. Because of this, we will break down the steps involved to help show how the code works.
This first code ignores that the dataset has two DVs, a problem we will fix in step 2\. The pivot functions can be easier to show than tell \- you may find it a useful exercise to run the below code and compare the newly created object `long` (Table [3\.3](transforming-data.html#tab:long1-example)) with the original `dat` Table [3\.1](transforming-data.html#tab:wide-data) before reading on.
```
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_to = "dv_condition",
values_to = "dv")
```
* As with the other tidyverse functions, the first argument specifies the dataset to use as the base, in this case `dat`. This argument name is often dropped in examples.
* `cols` specifies all the columns you want to transform. The easiest way to visualise this is to think about which columns would be the same in the new long\-form dataset and which will change. If you refer back to Table [3\.1](transforming-data.html#tab:wide-data), you can see that `id`, `age`, and `language` all remain, while the columns that contain the measurements of the DVs change. The colon notation `first_column:last_column` is used to select all variables from the first column specified to the last In our code, `cols` specifies that the columns we want to transform are `rt_word` to `acc_nonword`.
* `names_to` specifies the name of the new column that will be created. This column will contain the names of the selected existing columns.
* Finally, `values_to` names the new column that will contain the values in the selected columns. In this case we'll call it `dv`.
At this point you may find it helpful to go back and compare `dat` and `long` again to see how each argument matches up with the output of the table.
Table 3\.3: Data in long format with mixed DVs.
| id | age | language | dv\_condition | dv |
| --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt\_word | 379\.46 |
| S001 | 22 | monolingual | rt\_nonword | 516\.82 |
| S001 | 22 | monolingual | acc\_word | 99\.00 |
| S001 | 22 | monolingual | acc\_nonword | 90\.00 |
| S002 | 33 | monolingual | rt\_word | 312\.45 |
| S002 | 33 | monolingual | rt\_nonword | 435\.04 |
### 3\.2\.2 Step 2: `pivot_longer()` adjusted
The problem with the above long\-format data\-set is that `dv_condition` combines two variables \- it has information about the type of DV and the condition of the IV. To account for this, we include a new argument `names_sep` and adjust `name_to` to specify the creation of two new columns. Note that we are pivoting the same wide\-format dataset `dat` as we did in step 1\.
```
long2 <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv")
```
* `names_sep` specifies how to split up the variable name in cases where it has multiple components. This is when taking care to name your variables consistently and meaningfully pays off. Because the word to the left of the separator (`_`) is always the DV type and the word to the right is always the condition of the within\-subject IV, it is easy to automatically split the columns.
* Note that when specifying more than one column name, they must be combined using `[c()](https://rdrr.io/r/base/c.html)` and be enclosed in their own quotation marks.
Table 3\.4: Data in long format with dv type and condition in separate columns.
| id | age | language | dv\_type | condition | dv |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt | word | 379\.46 |
| S001 | 22 | monolingual | rt | nonword | 516\.82 |
| S001 | 22 | monolingual | acc | word | 99\.00 |
| S001 | 22 | monolingual | acc | nonword | 90\.00 |
| S002 | 33 | monolingual | rt | word | 312\.45 |
| S002 | 33 | monolingual | rt | nonword | 435\.04 |
### 3\.2\.3 Step 3: `pivot_wider()`
Although we have now split the columns so that there are separate variables for the DV type and level of condition, because the two DVs are different types of data, there is an additional bit of wrangling required to get the data in the right format for plotting.
In the current long\-format dataset, the column `dv` contains both reaction time and accuracy measures. Keeping in mind the rule of thumb that *anything that shares an axis should probably be in the same column,* this creates a problem because we cannot plot two different units of measurement on the same axis. To fix this we need to use the function `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)`. Again, we would encourage you at this point to compare `long2` and `dat_long` with the below code to try and map the connections before reading on.
```
dat_long <- [pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(long2,
names_from = "dv_type",
values_from = "dv")
```
* The first argument is again the dataset you wish to work from, in this case `long2`. We have removed the argument name `data` in this example.
* `names_from` is the reverse of `names_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It will take the values from the variable specified and use these as the new column names. In this case, the values of `rt` and `acc` that are currently in the `dv_type` column will become the new column names.
* `values_from` is the reverse of `values_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It specifies the column that contains the values to fill the new columns with. In this case, the new columns `rt` and `acc` will be filled with the values that were in `dv`.
Again, it can be helpful to compare each dataset with the code to see how it aligns. This final long\-form data should look like Table [3\.2](transforming-data.html#tab:long).
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset. Because the process of transformation has introduced some repetition for these variables, the wide\-format dataset where one row equals one participant should be used for demographic information. Finally, the three step process noted above is broken down for teaching purposes, in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = "dv_type",
values_from = "dv")
```
3\.3 Histogram 2
----------------
Now that we have the experimental data in the right form, we can begin to create some useful visualizations. First, to demonstrate how code recipes can be reused and adapted, we will create histograms of reaction time and accuracy. The below code uses the same template as before but changes the dataset (`dat_long`), the bin\-widths of the histograms, the `x` variable to display (`rt`/`acc`), and the name of the x\-axis.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
Figure 3\.1: Histograms showing the distribution of reaction time (top) and accuracy (bottom)
3\.4 Density plots
------------------
The layer system makes it easy to create new types of plots by adapting existing recipes. For example, rather than creating a histogram, we can create a smoothed density plot by calling `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` rather than `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. The rest of the code remains identical.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure 3\.2: Density plot of reaction time.
### 3\.4\.1 Grouped density plots
Density plots are most useful for comparing the distributions of different groups of data. Because the dataset is now in long format, with each variable contained within a single column, we can map `condition` to the plot.
* In addition to mapping `rt` to the x\-axis, we specify the `fill` aesthetic to fill the visualisation so that each level of the `condition` variable is represented by a different colour.
* Because the density plots are overlapping, we set `alpha = 0.75` to make the geoms 75% transparent.
* As with the x and y\-axis scale functions, we can edit the names and labels of our fill aesthetic by adding on another `scale_*` layer (`[scale_fill_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)`).
* Note that the `fill` here is set inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function, which tells ggplot to set the fill differently for each value in the `condition` column. You cannot specify a particular colour here (e.g., `fill = "red"`) as you could when you set `fill` inside the `geom_*()` function before; a short sketch contrasting the two appears after the figure below.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.3: Density plot of reaction times grouped by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
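To make the final bullet point above concrete, the short sketch below (not from the paper) contrasts the two uses of `fill`: a single fixed colour belongs inside the geom call, whereas a mapping to a variable belongs inside `aes()`, where ggplot assigns one colour per level of `condition`.
```
# Fixed colour: an argument to the geom, applied to the whole layer
ggplot(dat_long, aes(x = rt)) +
  geom_density(fill = "red", alpha = 0.75)

# Mapped colour: inside aes(), one fill per level of condition
ggplot(dat_long, aes(x = rt, fill = condition)) +
  geom_density(alpha = 0.75)
```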
3\.5 Scatterplots
-----------------
Scatterplots are created by calling `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` and require both an `x` and `y` variable to be specified in the mapping.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)()
```
Figure 3\.4: Scatterplot of reaction time versus age.
A line of best fit can be added with an additional layer that calls the function `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. The default is to draw a LOESS (curved) regression line; a linear line of best fit can be specified using `method = "lm"`. By default, `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` will also draw a confidence envelope around the regression line; this can be removed by adding `se = FALSE` to `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. A common error is to try to use `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` to draw the line of best fit, which, whilst a sensible guess, will not work (try it).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.5: Line of best fit for reaction time versus age.
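As a sketch of the common error described above (not shown in the paper), the first plot below uses `geom_line()`, which simply connects the observed points in order along the x\-axis rather than fitting a model; the second uses `geom_smooth()` with the confidence envelope switched off via `se = FALSE`.
```
# geom_line() joins the raw observations: a zig-zag, not a line of best fit
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_line()

# geom_smooth() fits and draws the model-based line; se = FALSE hides the envelope
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
```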
### 3\.5\.1 Grouped scatterplots
Similar to the density plot, the scatterplot can also be easily adjusted to display grouped data. For `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, the grouping variable is mapped to `colour` rather than `fill` and the relevant `scale_*` function is added.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_colour_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.6: Grouped scatterplot of reaction time versus age by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
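A design note, sketched below rather than taken from the paper: because `colour` is specified in the global `aes()`, `geom_smooth()` inherits the grouping and fits one line per condition. If `colour` were mapped only inside `geom_point()`, the points would still be coloured by condition but a single overall line would be fitted.
```
# Colour mapped only in geom_point(): coloured points, but one overall line,
# because geom_smooth() no longer inherits the condition grouping
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point(aes(colour = condition)) +
  geom_smooth(method = "lm") +
  scale_colour_discrete(name = "Condition",
                        labels = c("Non-Word", "Word"))
```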
3\.6 Long to wide format
------------------------
Following the rule that *anything that shares an axis should probably be in the same column* means that we will frequently need our data in long\-form when using `ggplot2`. However, there are some cases when wide format is necessary. For example, we may wish to visualise the relationship between reaction time in the word and non\-word conditions. This requires that the corresponding word and non\-word values for each participant be in the same row. The easiest way to achieve this in our case would simply be to use the original wide\-format data as the input:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_word, y = rt_nonword, colour = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.7: Scatterplot with data grouped by language group.
However, there may also be cases when you do not have an original wide\-format version, in which case you can use the `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)` function to transform from long to wide.
```
dat_wide <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(id_cols = "id",
names_from = "condition",
values_from = [c](https://rdrr.io/r/base/c.html)(rt, acc))
```
| id | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- |
| S001 | 379\.4585 | 516\.8176 | 99 | 90 |
| S002 | 312\.4513 | 435\.0404 | 94 | 82 |
| S003 | 404\.9407 | 458\.5022 | 96 | 87 |
| S004 | 298\.3734 | 335\.8933 | 92 | 76 |
| S005 | 316\.4250 | 401\.3214 | 91 | 83 |
| S006 | 357\.1710 | 367\.3355 | 96 | 78 |
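As a brief sketch (not in the original paper), the reconstructed `dat_wide` can then be plotted in the same way as the original wide\-format data. Note that because `id_cols = "id"` was the only identifier kept, `language` is not available in `dat_wide` for grouping.
```
ggplot(dat_wide, aes(x = rt_word, y = rt_nonword)) +
  geom_point() +
  geom_smooth(method = "lm")
```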
3\.7 Customisation 2
--------------------
### 3\.7\.1 Accessible colour schemes
One of the drawbacks of using `ggplot2` for visualisation is that the default colour scheme is not accessible (or visually appealing). The red and green default palette is difficult for colour\-blind people to differentiate, and also does not display well in greyscale. You can specify exact custom colours for your plots, but one easy option is to use a custom colour palette. These take the same arguments as their default `scale` sister functions for updating axis names and labels, but display plots in contrasting colours that can be read by colour\-blind people and that also print well in greyscale. For categorical colours, the "Set2", "Dark2" and "Paired" palettes from the `brewer` scale functions are colourblind\-safe (but are hard to distinguish in greyscale). For continuous colours, such as when colour represents the magnitude of a correlation in a tile plot, the `viridis` scale functions provide a number of different colourblind and greyscale\-safe options.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
Figure 3\.8: Use the Dark2 brewer colour scheme for accessibility.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
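For completeness, here is a sketch (not in the paper) of the `viridis` option mentioned above, using the discrete variant `scale_colour_viridis_d()` that ships with `ggplot2`; the continuous variant `scale_colour_viridis_c()` would be used when colour represents a numeric value.
```
ggplot(dat_long, aes(x = rt, y = age, colour = condition)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_colour_viridis_d(name = "Condition",
                         labels = c("Non-word", "Word"))
```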
### 3\.7\.2 Specifying axis `breaks` with `seq()`
Previously, when we have edited the axis `breaks`, we have done so manually, typing out all the values we want to display on the axis. For example, the below code edits the y\-axis so that `age` is displayed in increments of 5\.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [c](https://rdrr.io/r/base/c.html)(20,25,30,35,40,45,50,55,60))
```
However, this is somewhat inefficient. Instead, we can use the function `[seq()](https://rdrr.io/r/base/seq.html)` (short for sequence) to specify the first and last values and the increment (`by`) at which the breaks should be placed between them.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [seq](https://rdrr.io/r/base/seq.html)(20,60, by = 5))
```
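The same pattern works for any continuous scale. The sketch below (not from the paper) applies `seq()` to both axes; the reaction\-time break values are illustrative and assume `rt` spans roughly 300 to 600 ms.
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  scale_x_continuous(breaks = seq(300, 600, by = 100)) +
  scale_y_continuous(breaks = seq(20, 60, by = 5))
```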
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to create grouped histograms that display the distributions of `rt` for each `language` group separately, and also edit the fill legend labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
# add in dodge
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, position = "dodge") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the names of the x\- and y\-axes on the scatterplot.
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots
Solution 3
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm", se = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Set2", # or "Dark2" or "Paired"
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset. Because the process of transformation has introduced some repetition for these variables, the wide\-format dataset where one row equals one participant should be used for demographic information. Finally, the three step process noted above is broken down for teaching purposes, in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = "dv_type",
values_from = "dv")
```
3\.3 Histogram 2
----------------
Now that we have the experimental data in the right form, we can begin to create some useful visualizations. First, to demonstrate how code recipes can be reused and adapted, we will create histograms of reaction time and accuracy. The below code uses the same template as before but changes the dataset (`dat_long`), the bin\-widths of the histograms, the `x` variable to display (`rt`/`acc`), and the name of the x\-axis.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
Figure 3\.1: Histograms showing the distribution of reaction time (top) and accuracy (bottom)
3\.4 Density plots
------------------
The layer system makes it easy to create new types of plots by adapting existing recipes. For example, rather than creating a histogram, we can create a smoothed density plot by calling `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` rather than `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. The rest of the code remains identical.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure 3\.2: Density plot of reaction time.
### 3\.4\.1 Grouped density plots
Density plots are most useful for comparing the distributions of different groups of data. Because the dataset is now in long format, with each variable contained within a single column, we can map `condition` to the plot.
* In addition to mapping `rt` to the x\-axis, we specify the `fill` aesthetic to fill the visualisation so that each level of the `condition` variable is represented by a different colour.
* Because the density plots are overlapping, we set `alpha = 0.75` to make the geoms 75% transparent.
* As with the x and y\-axis scale functions, we can edit the names and labels of our fill aesthetic by adding on another `scale_*` layer (`[scale_fill_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)`).
* Note that the `fill` here is set inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function, which tells ggplot to set the fill differently for each value in the `condition` column. You cannot specify which colour here (e.g., `fill="red"`), like you could when you set `fill` inside the `geom_*()` function before.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.3: Density plot of reaction times grouped by condition.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.5 Scatterplots
-----------------
Scatterplots are created by calling `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` and require both an `x` and `y` variable to be specified in the mapping.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)()
```
Figure 3\.4: Scatterplot of reaction time versus age.
A line of best fit can be added with an additional layer that calls the function `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. The default is to draw a LOESS or curved regression line. However, a linear line of best fit can be specified using `method = "lm"`. By default, `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` will also draw a confidence envelope around the regression line; this can be removed by adding `se = FALSE` to `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. A common error is to try and use `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` to draw the line of best fit, which whilst a sensible guess, will not work (try it).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.5: Line of best fit for reaction time versus age.
### 3\.5\.1 Grouped scatterplots
Similar to the density plot, the scatterplot can also be easily adjusted to display grouped data. For `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, the grouping variable is mapped to `colour` rather than `fill` and the relevant `scale_*` function is added.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_colour_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.6: Grouped scatterplot of reaction time versus age by condition.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.6 Long to wide format
------------------------
Following the rule that *anything that shares an axis should probably be in the same column* means that we will frequently need our data in long\-form when using `ggplot2`, However, there are some cases when wide format is necessary. For example, we may wish to visualise the relationship between reaction time in the word and non\-word conditions. This requires that the corresponding word and non\-word values for each participant be in the same row. The easiest way to achieve this in our case would simply be to use the original wide\-format data as the input:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_word, y = rt_nonword, colour = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.7: Scatterplot with data grouped by language group
However, there may also be cases when you do not have an original wide\-format version and you can use the `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)` function to transform from long to wide.
```
dat_wide <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(id_cols = "id",
names_from = "condition",
values_from = [c](https://rdrr.io/r/base/c.html)(rt,acc))
```
| id | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- |
| S001 | 379\.4585 | 516\.8176 | 99 | 90 |
| S002 | 312\.4513 | 435\.0404 | 94 | 82 |
| S003 | 404\.9407 | 458\.5022 | 96 | 87 |
| S004 | 298\.3734 | 335\.8933 | 92 | 76 |
| S005 | 316\.4250 | 401\.3214 | 91 | 83 |
| S006 | 357\.1710 | 367\.3355 | 96 | 78 |
3\.7 Customisation 2
--------------------
### 3\.7\.1 Accessible colour schemes
One of the drawbacks of using `ggplot2` for visualisation is that the default colour scheme is not accessible (or visually appealing). The red and green default palette is difficult for colour\-blind people to differentiate, and also does not display well in greyscale. You can specify exact custom colours for your plots, but one easy option is to use a custom colour palette. These take the same arguments as their default `scale` sister functions for updating axis names and labels, but display plots in contrasting colours that can be read by colour\-blind people and that also print well in grey scale. For categorical colours, the "Set2", "Dark2" and "Paired" palettes from the `brewer` scale functions are colourblind\-safe (but are hard to distinhuish in greyscale). For continuous colours, such as when colour is representing the magnitude of a correlation in a tile plot, the `viridis` scale functions provide a number of different colourblind and greyscale\-safe options.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
Figure 3\.8: Use the Dark2 brewer colour scheme for accessibility.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
### 3\.7\.2 Specifying axis `breaks` with `seq()`
Previously, when we have edited the `breaks` on the axis labels, we have done so manually, typing out all the values we want to display on the axis. For example, the below code edits the y\-axis so that `age` is displayed in increments of 5\.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [c](https://rdrr.io/r/base/c.html)(20,25,30,35,40,45,50,55,60))
```
However, this is somewhat inefficient. Instead, we can use the function `[seq()](https://rdrr.io/r/base/seq.html)` (short for sequence) to specify the first and last value and the increments `by` which the breaks should display between these two values.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [seq](https://rdrr.io/r/base/seq.html)(20,60, by = 5))
```
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to created grouped histograms that display the distributions for `rt` for each `language` group separately and also edit the fill axis labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
# add in dodge
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, position = "dodge") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the name of the x and y\-axis on the scatterplot
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots
Solution 3
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm", se = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Set2", # or "Dark2" or "Paired"
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
3\.1 Data formats
-----------------
To visualise the experimental reaction time and accuracy data using `ggplot2`, we first need to reshape the data from wide format to long format. This step can cause friction with novice users of R. Traditionally, psychologists have been taught data skills using wide\-format data. Wide\-format data typically has one row of data for each participant, with separate columns for each score or variable. For repeated\-measures variables, the dependent variable is split across different columns. For between\-groups variables, a separate column is added to encode the group to which a participant or observation belongs.
The simulated lexical decision data is currently in wide format (see Table [3\.1](transforming-data.html#tab:wide-data)), where each participant's aggregated 4 reaction time and accuracy for each level of the within\-subject variable is split across multiple columns for the repeated factor of conditon (words versus non\-words).
Table 3\.1: Data in wide format.
| id | age | language | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | 379\.46 | 516\.82 | 99 | 90 |
| S002 | 33 | monolingual | 312\.45 | 435\.04 | 94 | 82 |
| S003 | 23 | monolingual | 404\.94 | 458\.50 | 96 | 87 |
| S004 | 28 | monolingual | 298\.37 | 335\.89 | 92 | 76 |
| S005 | 26 | monolingual | 316\.42 | 401\.32 | 91 | 83 |
| S006 | 29 | monolingual | 357\.17 | 367\.34 | 96 | 78 |
Wide format is popular because it is intuitive to read and easy to enter data into as all the data for one participant is contained within a single row. However, for the purposes of analysis, and particularly for analysis using R, this format is unsuitable. Whilst it is intuitive to read by a human, the same is not true for a computer. Wide\-format data concatenates multiple pieces of information in a single column, for example in Table [3\.1](transforming-data.html#tab:wide-data), `rt_word` contains information related to both a DV and one level of an IV. In comparison, long\-format data separates the DV from the IVs so that each column represents only one variable. The less intuitive part is that long\-format data has multiple rows for each participant (one row for each observation) and a column that encodes the level of the IV (`word` or `nonword`). Wickham ([2014](references.html#ref-wickham2014tidy)) provides a comprehensive overview of the benefits of a similar format known as tidy data, which is a standard way of mapping a dataset to its structure. For the purposes of this tutorial there are two important rules: each column should be a *variable* and each row should be an *observation*.
Moving from using wide\-format to long\-format datasets can require a conceptual shift on the part of the researcher and one that usually only comes with practice and repeated exposure5. It may be helpful to make a note that “row \= participant” (wide format) and “row \= observation” (long format) until you get used to moving between the formats. For our example dataset, adhering to these rules for reshaping the data would produce Table [3\.2](transforming-data.html#tab:long). Rather than different observations of the same dependent variable being split across columns, there is now a single column for the DV reaction time, and a single column for the DV accuracy. Each participant now has multiple rows of data, one for each observation (i.e., for each participant there will be as many rows as there are levels of the within\-subject IV). Although there is some repetition of age and language group, each row is unique when looking at the combination of measures.
Table 3\.2: Data in the correct format for visualization.
| id | age | language | condition | rt | acc |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | word | 379\.46 | 99 |
| S001 | 22 | monolingual | nonword | 516\.82 | 90 |
| S002 | 33 | monolingual | word | 312\.45 | 94 |
| S002 | 33 | monolingual | nonword | 435\.04 | 82 |
| S003 | 23 | monolingual | word | 404\.94 | 96 |
| S003 | 23 | monolingual | nonword | 458\.50 | 87 |
The benefits and flexibility of this format will hopefully become apparent as we progress through the tutorial, however, a useful rule of thumb when working with data in R for visualisation is that *anything that shares an axis should probably be in the same column*. For example, a simple boxplot showing reaction time by condition would display the variable `condition` on the x\-axis with bars representing both the `word` and `nonword` data, and `rt` on the y\-axis. Therefore, all the data relating to `condition` should be in one column, and all the data relating to `rt` should be in a separate single column, rather than being split like in wide\-format data.
3\.2 Wide to long format
------------------------
We have chosen a 2 x 2 design with two DVs, as we anticipate that this is a design many researchers will be familiar with and may also have existing datasets with a similar structure. However, it is worth normalising that trial\-and\-error is part of the process of learning how to apply these functions to new datasets and structures. Data visualisation can be a useful way to scaffold learning these data transformations because they can provide a concrete visual check as to whether you have done what you intended to do with your data.
### 3\.2\.1 Step 1: `pivot_longer()`
The first step is to use the function `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)` to transform the data to long\-form. We have purposefully used a more complex dataset with two DVs for this tutorial to aid researchers applying our code to their own datasets. Because of this, we will break down the steps involved to help show how the code works.
This first code ignores that the dataset has two DVs, a problem we will fix in step 2\. The pivot functions can be easier to show than tell \- you may find it a useful exercise to run the below code and compare the newly created object `long` (Table [3\.3](transforming-data.html#tab:long1-example)) with the original `dat` Table [3\.1](transforming-data.html#tab:wide-data) before reading on.
```
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_to = "dv_condition",
values_to = "dv")
```
* As with the other tidyverse functions, the first argument specifies the dataset to use as the base, in this case `dat`. This argument name is often dropped in examples.
* `cols` specifies all the columns you want to transform. The easiest way to visualise this is to think about which columns would be the same in the new long\-form dataset and which will change. If you refer back to Table [3\.1](transforming-data.html#tab:wide-data), you can see that `id`, `age`, and `language` all remain, while the columns that contain the measurements of the DVs change. The colon notation `first_column:last_column` is used to select all variables from the first column specified to the last In our code, `cols` specifies that the columns we want to transform are `rt_word` to `acc_nonword`.
* `names_to` specifies the name of the new column that will be created. This column will contain the names of the selected existing columns.
* Finally, `values_to` names the new column that will contain the values in the selected columns. In this case we'll call it `dv`.
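As flagged in the `cols` bullet above, the colon notation is not the only way to select these columns. A short sketch of two equivalent selections, assuming the same `dat` object and the tidyselect helpers that `pivot_longer()` accepts (the object names `long_a` and `long_b` are ours, used only for illustration):

```
# Spell out the four DV columns explicitly...
long_a <- pivot_longer(data = dat,
                       cols = c(rt_word, rt_nonword, acc_word, acc_nonword),
                       names_to = "dv_condition",
                       values_to = "dv")

# ...or select them by the prefixes of their names
long_b <- pivot_longer(data = dat,
                       cols = starts_with("rt_") | starts_with("acc_"),
                       names_to = "dv_condition",
                       values_to = "dv")
```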
At this point you may find it helpful to go back and compare `dat` and `long` again to see how each argument matches up with the output of the table.
Table 3\.3: Data in long format with mixed DVs.
| id | age | language | dv\_condition | dv |
| --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt\_word | 379\.46 |
| S001 | 22 | monolingual | rt\_nonword | 516\.82 |
| S001 | 22 | monolingual | acc\_word | 99\.00 |
| S001 | 22 | monolingual | acc\_nonword | 90\.00 |
| S002 | 33 | monolingual | rt\_word | 312\.45 |
| S002 | 33 | monolingual | rt\_nonword | 435\.04 |
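A quick sanity check at this stage is to compare the dimensions of the two objects: because each of the four DV columns becomes its own row, `long` should have four times as many rows as `dat`. A sketch of that check:

```
# long should contain 4 rows (rt/acc x word/nonword) for every row of dat
nrow(dat)
nrow(long)
nrow(long) == nrow(dat) * 4

# and the first few rows should match Table 3.3
head(long)
```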
### 3\.2\.2 Step 2: `pivot_longer()` adjusted
The problem with the above long\-format dataset is that `dv_condition` combines two variables \- it has information about the type of DV and the condition of the IV. To account for this, we include a new argument `names_sep` and adjust `names_to` to specify the creation of two new columns (an alternative using `names_pattern` is sketched after Table 3\.4 below). Note that we are pivoting the same wide\-format dataset `dat` as we did in step 1\.
```
long2 <- pivot_longer(data = dat,
                      cols = rt_word:acc_nonword,
                      names_sep = "_",
                      names_to = c("dv_type", "condition"),
                      values_to = "dv")
```
* `names_sep` specifies how to split up the variable name in cases where it has multiple components. This is when taking care to name your variables consistently and meaningfully pays off. Because the word to the left of the separator (`_`) is always the DV type and the word to the right is always the condition of the within\-subject IV, it is easy to automatically split the columns.
* Note that when specifying more than one column name, they must be combined using `[c()](https://rdrr.io/r/base/c.html)` and be enclosed in their own quotation marks.
Table 3\.4: Data in long format with dv type and condition in separate columns.
| id | age | language | dv\_type | condition | dv |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt | word | 379\.46 |
| S001 | 22 | monolingual | rt | nonword | 516\.82 |
| S001 | 22 | monolingual | acc | word | 99\.00 |
| S001 | 22 | monolingual | acc | nonword | 90\.00 |
| S002 | 33 | monolingual | rt | word | 312\.45 |
| S002 | 33 | monolingual | rt | nonword | 435\.04 |
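As mentioned at the start of step 2, there is an alternative to `names_sep` worth knowing about: `pivot_longer()` also accepts a `names_pattern` argument, which uses a regular expression with one capture group per new column and is useful when column names do not follow a single consistent separator. A sketch of the same split done that way (the object name `long2_alt` is ours, used only for illustration):

```
# The first capture group is the DV type (rt/acc), the second is the condition
long2_alt <- pivot_longer(data = dat,
                          cols = rt_word:acc_nonword,
                          names_pattern = "(.*)_(.*)",
                          names_to = c("dv_type", "condition"),
                          values_to = "dv")
```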
### 3\.2\.3 Step 3: `pivot_wider()`
Although we have now split the columns so that there are separate variables for the DV type and the level of condition, the two DVs are measured in different units, so there is an additional bit of wrangling required to get the data in the right format for plotting.
In the current long\-format dataset, the column `dv` contains both reaction time and accuracy measures. Keeping in mind the rule of thumb that *anything that shares an axis should probably be in the same column,* this creates a problem because we cannot plot two different units of measurement on the same axis. To fix this we need to use the function `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)`. Again, we would encourage you at this point to compare `long2` and `dat_long` with the below code to try and map the connections before reading on.
```
dat_long <- pivot_wider(long2,
                        names_from = "dv_type",
                        values_from = "dv")
```
* The first argument is again the dataset you wish to work from, in this case `long2`. We have removed the argument name `data` in this example.
* `names_from` is the reverse of `names_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It will take the values from the variable specified and use these as the new column names. In this case, the values of `rt` and `acc` that are currently in the `dv_type` column will become the new column names.
* `values_from` is the reverse of `values_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It specifies the column that contains the values to fill the new columns with. In this case, the new columns `rt` and `acc` will be filled with the values that were in `dv`.
Again, it can be helpful to compare each dataset with the code to see how it aligns. This final long\-form data should look like Table [3\.2](transforming-data.html#tab:long).
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset. Because the process of transformation has introduced some repetition for these variables, the wide\-format dataset where one row equals one participant should be used for demographic information. Finally, the three\-step process noted above is broken down for teaching purposes; in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- pivot_longer(data = dat,
                         cols = rt_word:acc_nonword,
                         names_sep = "_",
                         names_to = c("dv_type", "condition"),
                         values_to = "dv") %>%
  pivot_wider(names_from = "dv_type",
              values_from = "dv")
```
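To illustrate the warning above about demographic statistics, here is a short sketch comparing the two datasets. Each participant contributes two rows to `dat_long` (one per condition), so any counts taken from it will be double the true sample sizes; use the wide\-format `dat` for demographics instead.

```
# One row per participant in the wide data...
nrow(dat)

# ...but two rows per participant in the long data,
# so group counts from dat_long are inflated
nrow(dat_long)
table(dat$language)       # correct group sizes
table(dat_long$language)  # double-counted
```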
3\.3 Histogram 2
----------------
Now that we have the experimental data in the right form, we can begin to create some useful visualisations. First, to demonstrate how code recipes can be reused and adapted, we will create histograms of reaction time and accuracy. The below code uses the same template as before but changes the dataset (`dat_long`), the bin\-widths of the histograms, the `x` variable to display (`rt`/`acc`), and the name of the x\-axis.
```
ggplot(dat_long, aes(x = rt)) +
  geom_histogram(binwidth = 10, fill = "white", colour = "black") +
  scale_x_continuous(name = "Reaction time (ms)")

ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", colour = "black") +
  scale_x_continuous(name = "Accuracy (0-100)")
```
Figure 3\.1: Histograms showing the distribution of reaction time (top) and accuracy (bottom)
3\.4 Density plots
------------------
The layer system makes it easy to create new types of plots by adapting existing recipes. For example, rather than creating a histogram, we can create a smoothed density plot by calling `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` rather than `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. The rest of the code remains identical.
```
ggplot(dat_long, aes(x = rt)) +
  geom_density() +
  scale_x_continuous(name = "Reaction time (ms)")
```
Figure 3\.2: Density plot of reaction time.
### 3\.4\.1 Grouped density plots
Density plots are most useful for comparing the distributions of different groups of data. Because the dataset is now in long format, with each variable contained within a single column, we can map `condition` to the plot.
* In addition to mapping `rt` to the x\-axis, we specify the `fill` aesthetic to fill the visualisation so that each level of the `condition` variable is represented by a different colour.
* Because the density plots are overlapping, we set `alpha = 0.75` to make the geoms 75% transparent.
* As with the x and y\-axis scale functions, we can edit the names and labels of our fill aesthetic by adding on another `scale_*` layer (`[scale_fill_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)`).
* Note that the `fill` here is set inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function, which tells ggplot to set the fill differently for each value in the `condition` column. You cannot specify a particular colour here (e.g., `fill = "red"`), as you could when you set `fill` inside the `geom_*()` function before; a short sketch of that fixed\-colour variant follows the figure below.
```
ggplot(dat_long, aes(x = rt, fill = condition)) +
  geom_density(alpha = 0.75) +
  scale_x_continuous(name = "Reaction time (ms)") +
  scale_fill_discrete(name = "Condition",
                      labels = c("Non-Word", "Word"))
```
Figure 3\.3: Density plot of reaction times grouped by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper because the labels "Word" and "Non\-word" were incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
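As referenced in the bullet list above, a fixed colour is set inside the geom rather than mapped inside `aes()`. A minimal sketch of that variant for contrast; note that without the `fill` mapping the two conditions are pooled into a single density and no legend is drawn.

```
# A fixed fill colour applies to the whole layer rather than to each condition
ggplot(dat_long, aes(x = rt)) +
  geom_density(fill = "red", alpha = 0.75)
```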
3\.5 Scatterplots
-----------------
Scatterplots are created by calling `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` and require both an `x` and `y` variable to be specified in the mapping.
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point()
```
Figure 3\.4: Scatterplot of reaction time versus age.
A line of best fit can be added with an additional layer that calls the function `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. The default is to draw a LOESS or curved regression line. However, a linear line of best fit can be specified using `method = "lm"`. By default, `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` will also draw a confidence envelope around the regression line; this can be removed by adding `se = FALSE` to `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. A common error is to try to use `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` to draw the line of best fit, which, whilst a sensible guess, will not work (try it).
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm")
```
Figure 3\.5: Line of best fit for reaction time versus age.
### 3\.5\.1 Grouped scatterplots
Similar to the density plot, the scatterplot can also be easily adjusted to display grouped data. For `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, the grouping variable is mapped to `colour` rather than `fill` and the relevant `scale_*` function is added.
```
ggplot(dat_long, aes(x = rt, y = age, colour = condition)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_colour_discrete(name = "Condition",
                        labels = c("Non-Word", "Word"))
```
Figure 3\.6: Grouped scatterplot of reaction time versus age by condition.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper because the labels "Word" and "Non\-word" were incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.6 Long to wide format
------------------------
Following the rule that *anything that shares an axis should probably be in the same column* means that we will frequently need our data in long format when using `ggplot2`. However, there are some cases when wide format is necessary. For example, we may wish to visualise the relationship between reaction time in the word and non\-word conditions. This requires that the corresponding word and non\-word values for each participant be in the same row. The easiest way to achieve this in our case would simply be to use the original wide\-format data as the input:
```
ggplot(dat, aes(x = rt_word, y = rt_nonword, colour = language)) +
  geom_point() +
  geom_smooth(method = "lm")
```
Figure 3\.7: Scatterplot with data grouped by language group
However, there may also be cases when you do not have an original wide\-format version, in which case you can use the `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)` function to transform from long to wide.
```
dat_wide <- dat_long %>%
  pivot_wider(id_cols = "id",
              names_from = "condition",
              values_from = c(rt, acc))
```
| id | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- |
| S001 | 379\.4585 | 516\.8176 | 99 | 90 |
| S002 | 312\.4513 | 435\.0404 | 94 | 82 |
| S003 | 404\.9407 | 458\.5022 | 96 | 87 |
| S004 | 298\.3734 | 335\.8933 | 92 | 76 |
| S005 | 316\.4250 | 401\.3214 | 91 | 83 |
| S006 | 357\.1710 | 367\.3355 | 96 | 78 |
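A short sketch showing the reshaped data in use. Note that because `id_cols = "id"` was specified above, `dat_wide` carries only `id` plus the four measurement columns, so unlike the earlier plot built from `dat` we cannot colour the points by `language` unless that column is also retained (for example by adding it to `id_cols`).

```
# Word vs non-word reaction times from the reshaped wide data
ggplot(dat_wide, aes(x = rt_word, y = rt_nonword)) +
  geom_point() +
  geom_smooth(method = "lm")
```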
3\.7 Customisation 2
--------------------
### 3\.7\.1 Accessible colour schemes
One of the drawbacks of using `ggplot2` for visualisation is that the default colour scheme is not accessible (or visually appealing). The red and green default palette is difficult for colour\-blind people to differentiate, and also does not display well in greyscale. You can specify exact custom colours for your plots, but one easy option is to use a custom colour palette. These take the same arguments as their default `scale` sister functions for updating axis names and labels, but display plots in contrasting colours that can be read by colour\-blind people and that also print well in greyscale. For categorical colours, the "Set2", "Dark2" and "Paired" palettes from the `brewer` scale functions are colourblind\-safe (but are hard to distinguish in greyscale). For continuous colours, such as when colour is representing the magnitude of a correlation in a tile plot, the `viridis` scale functions provide a number of different colourblind and greyscale\-safe options.
```
ggplot(dat_long, aes(x = rt, y = age, colour = condition)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_color_brewer(palette = "Dark2",
                     name = "Condition",
                     labels = c("Non-word", "Word"))
```
Figure 3\.8: Use the Dark2 brewer colour scheme for accessibility.
Correction to paper
Please note that the code and figure for this plot have been corrected from the published paper because the labels "Word" and "Non\-word" were incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment: R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
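The paragraph above also mentions the `viridis` scales for continuous colours; ggplot2 additionally ships discrete versions such as `scale_colour_viridis_d()`. A sketch of the same plot using that scale instead of the brewer palette:

```
ggplot(dat_long, aes(x = rt, y = age, colour = condition)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_colour_viridis_d(name = "Condition",
                         labels = c("Non-word", "Word"))
```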
### 3\.7\.2 Specifying axis `breaks` with `seq()`
Previously, when we have edited the `breaks` on an axis, we have done so manually, typing out all the values we want to display. For example, the below code edits the y\-axis so that `age` is displayed in increments of 5\.
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  scale_y_continuous(breaks = c(20, 25, 30, 35, 40, 45, 50, 55, 60))
```
However, this is somewhat inefficient. Instead, we can use the function `[seq()](https://rdrr.io/r/base/seq.html)` (short for sequence) to specify the first and last value and the increments `by` which the breaks should display between these two values.
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  scale_y_continuous(breaks = seq(20, 60, by = 5))
```
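If you are unsure what `[seq()](https://rdrr.io/r/base/seq.html)` produces, it can help to run it on its own in the console first; the expected output is shown as a comment:

```
seq(20, 60, by = 5)
#> [1] 20 25 30 35 40 45 50 55 60
```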
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to create grouped histograms that display the distributions of `rt` for each `language` group separately, and also edit the fill legend labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
ggplot(dat_long, aes(x = rt, fill = language)) +
  geom_histogram(binwidth = 10) +
  scale_x_continuous(name = "Reaction time (ms)") +
  scale_fill_discrete(name = "Group",
                      labels = c("Monolingual", "Bilingual"))

# add in dodge
ggplot(dat_long, aes(x = rt, fill = language)) +
  geom_histogram(binwidth = 10, position = "dodge") +
  scale_x_continuous(name = "Reaction time (ms)") +
  scale_fill_discrete(name = "Group",
                      labels = c("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the names of the x\- and y\-axes on the scatterplot.
Solution 2
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_continuous(name = "Reaction time") +
  scale_y_continuous(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots.
Solution 3
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  scale_x_continuous(name = "Reaction time") +
  scale_y_continuous(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth() +
  scale_x_continuous(name = "Reaction time") +
  scale_y_continuous(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution 5
```
ggplot(dat_long, aes(x = rt, fill = condition)) +
  geom_density(alpha = 0.75) +
  scale_x_continuous(name = "Reaction time (ms)") +
  scale_fill_brewer(palette = "Set2", # or "Dark2" or "Paired"
                    name = "Condition",
                    labels = c("Non-word", "Word"))
```
| Field Specific |
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [seq](https://rdrr.io/r/base/seq.html)(20,60, by = 5))
```
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to created grouped histograms that display the distributions for `rt` for each `language` group separately and also edit the fill axis labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
# add in dodge
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, position = "dodge") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the name of the x and y\-axis on the scatterplot
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots
Solution 3
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm", se = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Set2", # or "Dark2" or "Paired"
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
3\.1 Data formats
-----------------
To visualise the experimental reaction time and accuracy data using `ggplot2`, we first need to reshape the data from wide format to long format. This step can cause friction with novice users of R. Traditionally, psychologists have been taught data skills using wide\-format data. Wide\-format data typically has one row of data for each participant, with separate columns for each score or variable. For repeated\-measures variables, the dependent variable is split across different columns. For between\-groups variables, a separate column is added to encode the group to which a participant or observation belongs.
The simulated lexical decision data is currently in wide format (see Table [3\.1](transforming-data.html#tab:wide-data)), where each participant's aggregated reaction time and accuracy for each level of the within\-subject variable are split across multiple columns for the repeated factor of condition (words versus non\-words).
Table 3\.1: Data in wide format.
| id | age | language | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | 379\.46 | 516\.82 | 99 | 90 |
| S002 | 33 | monolingual | 312\.45 | 435\.04 | 94 | 82 |
| S003 | 23 | monolingual | 404\.94 | 458\.50 | 96 | 87 |
| S004 | 28 | monolingual | 298\.37 | 335\.89 | 92 | 76 |
| S005 | 26 | monolingual | 316\.42 | 401\.32 | 91 | 83 |
| S006 | 29 | monolingual | 357\.17 | 367\.34 | 96 | 78 |
Wide format is popular because it is intuitive to read and easy to enter data into as all the data for one participant is contained within a single row. However, for the purposes of analysis, and particularly for analysis using R, this format is unsuitable. Whilst it is intuitive to read by a human, the same is not true for a computer. Wide\-format data concatenates multiple pieces of information in a single column, for example in Table [3\.1](transforming-data.html#tab:wide-data), `rt_word` contains information related to both a DV and one level of an IV. In comparison, long\-format data separates the DV from the IVs so that each column represents only one variable. The less intuitive part is that long\-format data has multiple rows for each participant (one row for each observation) and a column that encodes the level of the IV (`word` or `nonword`). Wickham ([2014](references.html#ref-wickham2014tidy)) provides a comprehensive overview of the benefits of a similar format known as tidy data, which is a standard way of mapping a dataset to its structure. For the purposes of this tutorial there are two important rules: each column should be a *variable* and each row should be an *observation*.
Moving from using wide\-format to long\-format datasets can require a conceptual shift on the part of the researcher, and one that usually only comes with practice and repeated exposure. It may be helpful to make a note that “row \= participant” (wide format) and “row \= observation” (long format) until you get used to moving between the formats. For our example dataset, adhering to these rules for reshaping the data would produce Table [3\.2](transforming-data.html#tab:long). Rather than different observations of the same dependent variable being split across columns, there is now a single column for the DV reaction time, and a single column for the DV accuracy. Each participant now has multiple rows of data, one for each observation (i.e., for each participant there will be as many rows as there are levels of the within\-subject IV). Although there is some repetition of age and language group, each row is unique when looking at the combination of measures.
Table 3\.2: Data in the correct format for visualization.
| id | age | language | condition | rt | acc |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | word | 379\.46 | 99 |
| S001 | 22 | monolingual | nonword | 516\.82 | 90 |
| S002 | 33 | monolingual | word | 312\.45 | 94 |
| S002 | 33 | monolingual | nonword | 435\.04 | 82 |
| S003 | 23 | monolingual | word | 404\.94 | 96 |
| S003 | 23 | monolingual | nonword | 458\.50 | 87 |
The benefits and flexibility of this format will hopefully become apparent as we progress through the tutorial, however, a useful rule of thumb when working with data in R for visualisation is that *anything that shares an axis should probably be in the same column*. For example, a simple boxplot showing reaction time by condition would display the variable `condition` on the x\-axis with bars representing both the `word` and `nonword` data, and `rt` on the y\-axis. Therefore, all the data relating to `condition` should be in one column, and all the data relating to `rt` should be in a separate single column, rather than being split like in wide\-format data.
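As a brief sketch of this principle (assuming the long\-format data in Table 3\.2 is stored in an object called `dat_long`, which we create later in this chapter), such a boxplot needs nothing more than mapping the two columns to the two axes:
```
# sketch only: condition and rt each occupy a single column,
# so they map directly onto the x- and y-axes
ggplot(dat_long, aes(x = condition, y = rt)) +
  geom_boxplot()
```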
3\.2 Wide to long format
------------------------
We have chosen a 2 x 2 design with two DVs, as we anticipate that this is a design many researchers will be familiar with and may also have existing datasets with a similar structure. However, it is worth normalising that trial\-and\-error is part of the process of learning how to apply these functions to new datasets and structures. Data visualisation can be a useful way to scaffold learning these data transformations because they can provide a concrete visual check as to whether you have done what you intended to do with your data.
### 3\.2\.1 Step 1: `pivot_longer()`
The first step is to use the function `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)` to transform the data to long\-form. We have purposefully used a more complex dataset with two DVs for this tutorial to aid researchers applying our code to their own datasets. Because of this, we will break down the steps involved to help show how the code works.
This first code ignores that the dataset has two DVs, a problem we will fix in step 2\. The pivot functions can be easier to show than tell \- you may find it a useful exercise to run the below code and compare the newly created object `long` (Table [3\.3](transforming-data.html#tab:long1-example)) with the original `dat` (Table [3\.1](transforming-data.html#tab:wide-data)) before reading on.
```
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_to = "dv_condition",
values_to = "dv")
```
* As with the other tidyverse functions, the first argument specifies the dataset to use as the base, in this case `dat`. This argument name is often dropped in examples.
* `cols` specifies all the columns you want to transform. The easiest way to visualise this is to think about which columns would be the same in the new long\-form dataset and which will change. If you refer back to Table [3\.1](transforming-data.html#tab:wide-data), you can see that `id`, `age`, and `language` all remain, while the columns that contain the measurements of the DVs change. The colon notation `first_column:last_column` is used to select all variables from the first column specified to the last. In our code, `cols` specifies that the columns we want to transform are `rt_word` to `acc_nonword`.
* `names_to` specifies the name of the new column that will be created. This column will contain the names of the selected existing columns.
* Finally, `values_to` names the new column that will contain the values in the selected columns. In this case we'll call it `dv`.
At this point you may find it helpful to go back and compare `dat` and `long` again to see how each argument matches up with the output of the table.
Table 3\.3: Data in long format with mixed DVs.
| id | age | language | dv\_condition | dv |
| --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt\_word | 379\.46 |
| S001 | 22 | monolingual | rt\_nonword | 516\.82 |
| S001 | 22 | monolingual | acc\_word | 99\.00 |
| S001 | 22 | monolingual | acc\_nonword | 90\.00 |
| S002 | 33 | monolingual | rt\_word | 312\.45 |
| S002 | 33 | monolingual | rt\_nonword | 435\.04 |
### 3\.2\.2 Step 2: `pivot_longer()` adjusted
The problem with the above long\-format dataset is that `dv_condition` combines two variables \- it has information about the type of DV and the condition of the IV. To account for this, we include a new argument `names_sep` and adjust `names_to` to specify the creation of two new columns. Note that we are pivoting the same wide\-format dataset `dat` as we did in step 1\.
```
long2 <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv")
```
* `names_sep` specifies how to split up the variable name in cases where it has multiple components. This is when taking care to name your variables consistently and meaningfully pays off. Because the word to the left of the separator (`_`) is always the DV type and the word to the right is always the condition of the within\-subject IV, it is easy to automatically split the columns.
* Note that when specifying more than one column name, they must be combined using `[c()](https://rdrr.io/r/base/c.html)` and be enclosed in their own quotation marks.
Table 3\.4: Data in long format with dv type and condition in separate columns.
| id | age | language | dv\_type | condition | dv |
| --- | --- | --- | --- | --- | --- |
| S001 | 22 | monolingual | rt | word | 379\.46 |
| S001 | 22 | monolingual | rt | nonword | 516\.82 |
| S001 | 22 | monolingual | acc | word | 99\.00 |
| S001 | 22 | monolingual | acc | nonword | 90\.00 |
| S002 | 33 | monolingual | rt | word | 312\.45 |
| S002 | 33 | monolingual | rt | nonword | 435\.04 |
### 3\.2\.3 Step 3: `pivot_wider()`
Although we have now split the columns so that there are separate variables for the DV type and level of condition, because the two DVs are different types of data, there is an additional bit of wrangling required to get the data in the right format for plotting.
In the current long\-format dataset, the column `dv` contains both reaction time and accuracy measures. Keeping in mind the rule of thumb that *anything that shares an axis should probably be in the same column,* this creates a problem because we cannot plot two different units of measurement on the same axis. To fix this we need to use the function `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)`. Again, we would encourage you at this point to compare `long2` and `dat_long` with the below code to try and map the connections before reading on.
```
dat_long <- [pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(long2,
names_from = "dv_type",
values_from = "dv")
```
* The first argument is again the dataset you wish to work from, in this case `long2`. We have removed the argument name `data` in this example.
* `names_from` is the reverse of `names_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It will take the values from the variable specified and use these as the new column names. In this case, the values of `rt` and `acc` that are currently in the `dv_type` column will become the new column names.
* `values_from` is the reverse of `values_to` from `[pivot_longer()](https://tidyr.tidyverse.org/reference/pivot_longer.html)`. It specifies the column that contains the values to fill the new columns with. In this case, the new columns `rt` and `acc` will be filled with the values that were in `dv`.
Again, it can be helpful to compare each dataset with the code to see how it aligns. This final long\-form data should look like Table [3\.2](transforming-data.html#tab:long).
If you are working with a dataset with only one DV, note that only step 1 of this process would be necessary. Also, be careful not to calculate demographic descriptive statistics from this long\-form dataset. Because the process of transformation has introduced some repetition for these variables, the wide\-format dataset where one row equals one participant should be used for demographic information. Finally, the three\-step process noted above is broken down for teaching purposes; in reality, one would likely do this in a single pipeline of code, for example:
```
dat_long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data = dat,
cols = rt_word:acc_nonword,
names_sep = "_",
names_to = [c](https://rdrr.io/r/base/c.html)("dv_type", "condition"),
values_to = "dv") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = "dv_type",
values_from = "dv")
```
3\.3 Histogram 2
----------------
Now that we have the experimental data in the right form, we can begin to create some useful visualizations. First, to demonstrate how code recipes can be reused and adapted, we will create histograms of reaction time and accuracy. The below code uses the same template as before but changes the dataset (`dat_long`), the bin\-widths of the histograms, the `x` variable to display (`rt`/`acc`), and the name of the x\-axis.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", colour = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
Figure 3\.1: Histograms showing the distribution of reaction time (top) and accuracy (bottom)
3\.4 Density plots
------------------
The layer system makes it easy to create new types of plots by adapting existing recipes. For example, rather than creating a histogram, we can create a smoothed density plot by calling `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` rather than `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. The rest of the code remains identical.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure 3\.2: Density plot of reaction time.
### 3\.4\.1 Grouped density plots
Density plots are most useful for comparing the distributions of different groups of data. Because the dataset is now in long format, with each variable contained within a single column, we can map `condition` to the plot.
* In addition to mapping `rt` to the x\-axis, we specify the `fill` aesthetic to fill the visualisation so that each level of the `condition` variable is represented by a different colour.
* Because the density plots are overlapping, we set `alpha = 0.75` to make the geoms semi\-transparent.
* As with the x and y\-axis scale functions, we can edit the names and labels of our fill aesthetic by adding on another `scale_*` layer (`[scale_fill_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)`).
* Note that the `fill` here is set inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function, which tells ggplot to set the fill differently for each value in the `condition` column. You cannot specify a particular colour here (e.g., `fill = "red"`), as you could when `fill` was set inside the `geom_*()` function before.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.3: Density plot of reaction times grouped by condition.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.5 Scatterplots
-----------------
Scatterplots are created by calling `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` and require both an `x` and `y` variable to be specified in the mapping.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)()
```
Figure 3\.4: Scatterplot of reaction time versus age.
A line of best fit can be added with an additional layer that calls the function `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. The default is to draw a LOESS or curved regression line. However, a linear line of best fit can be specified using `method = "lm"`. By default, `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` will also draw a confidence envelope around the regression line; this can be removed by adding `se = FALSE` to `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)`. A common error is to try and use `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` to draw the line of best fit, which whilst a sensible guess, will not work (try it).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.5: Line of best fit for reaction time versus age.
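For example, a minimal variant of the plot above drops the confidence envelope (the same adjustment appears again in the activities at the end of this section):
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) # remove the confidence envelope
```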
### 3\.5\.1 Grouped scatterplots
Similar to the density plot, the scatterplot can also be easily adjusted to display grouped data. For `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, the grouping variable is mapped to `colour` rather than `fill` and the relevant `scale_*` function is added.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_colour_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word"))
```
Figure 3\.6: Grouped scatterplot of reaction time versus age by condition.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
3\.6 Long to wide format
------------------------
Following the rule that *anything that shares an axis should probably be in the same column* means that we will frequently need our data in long\-form when using `ggplot2`. However, there are some cases when wide format is necessary. For example, we may wish to visualise the relationship between reaction time in the word and non\-word conditions. This requires that the corresponding word and non\-word values for each participant be in the same row. The easiest way to achieve this in our case would simply be to use the original wide\-format data as the input:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_word, y = rt_nonword, colour = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure 3\.7: Scatterplot with data grouped by language group
However, there may also be cases when you do not have an original wide\-format version, in which case you can use the `[pivot_wider()](https://tidyr.tidyverse.org/reference/pivot_wider.html)` function to transform from long to wide.
```
dat_wide <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(id_cols = "id",
names_from = "condition",
values_from = [c](https://rdrr.io/r/base/c.html)(rt,acc))
```
| id | rt\_word | rt\_nonword | acc\_word | acc\_nonword |
| --- | --- | --- | --- | --- |
| S001 | 379\.4585 | 516\.8176 | 99 | 90 |
| S002 | 312\.4513 | 435\.0404 | 94 | 82 |
| S003 | 404\.9407 | 458\.5022 | 96 | 87 |
| S004 | 298\.3734 | 335\.8933 | 92 | 76 |
| S005 | 316\.4250 | 401\.3214 | 91 | 83 |
| S006 | 357\.1710 | 367\.3355 | 96 | 78 |
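As a brief sketch (not in the published paper), the reshaped `dat_wide` can feed the same kind of plot. Note that because `id_cols = "id"` kept only the participant identifier, `language` is not available here to colour the points:
```
# scatterplot of word vs non-word reaction times from the reshaped wide data
ggplot(dat_wide, aes(x = rt_word, y = rt_nonword)) +
  geom_point() +
  geom_smooth(method = "lm")
```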
3\.7 Customisation 2
--------------------
### 3\.7\.1 Accessible colour schemes
One of the drawbacks of using `ggplot2` for visualisation is that the default colour scheme is not accessible (or visually appealing). The red and green default palette is difficult for colour\-blind people to differentiate, and also does not display well in greyscale. You can specify exact custom colours for your plots, but one easy option is to use a custom colour palette. These take the same arguments as their default `scale` sister functions for updating axis names and labels, but display plots in contrasting colours that can be read by colour\-blind people and that also print well in grey scale. For categorical colours, the "Set2", "Dark2" and "Paired" palettes from the `brewer` scale functions are colourblind\-safe (but are hard to distinguish in greyscale). For continuous colours, such as when colour is representing the magnitude of a correlation in a tile plot, the `viridis` scale functions provide a number of different colourblind and greyscale\-safe options.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age, colour = condition)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
Figure 3\.8: Use the Dark2 brewer colour scheme for accessibility.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
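As a brief sketch of the continuous case mentioned above (not in the published paper), a `viridis` scale can be added when a continuous variable is mapped to colour; here we assume, purely for illustration, that accuracy is mapped to the colour aesthetic:
```
# hypothetical example: colour points by the continuous variable acc
ggplot(dat_long, aes(x = rt, y = age, colour = acc)) +
  geom_point() +
  scale_colour_viridis_c(name = "Accuracy (0-100)")
```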
### 3\.7\.2 Specifying axis `breaks` with `seq()`
Previously, when we have edited the `breaks` on the axis labels, we have done so manually, typing out all the values we want to display on the axis. For example, the below code edits the y\-axis so that `age` is displayed in increments of 5\.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [c](https://rdrr.io/r/base/c.html)(20,25,30,35,40,45,50,55,60))
```
However, this is somewhat inefficient. Instead, we can use the function `[seq()](https://rdrr.io/r/base/seq.html)` (short for sequence) to specify the first and last value and the increments `by` which the breaks should display between these two values.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(breaks = [seq](https://rdrr.io/r/base/seq.html)(20,60, by = 5))
```
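If you are unsure what values a given call to `seq()` will produce, you can run it on its own in the console first:
```
seq(20, 60, by = 5)
# [1] 20 25 30 35 40 45 50 55 60
```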
3\.8 Activities 2
-----------------
Before you move on try the following:
1. Use `fill` to create grouped histograms that display the distributions for `rt` for each `language` group separately and also edit the fill axis labels. Try adding `position = "dodge"` to `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)` to see what happens.
Solution 1
```
# fill and axis changes
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
# add in dodge
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = language)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, position = "dodge") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual", "Bilingual"))
```
2. Use `scale_*` functions to edit the name of the x and y\-axis on the scatterplot
Solution 2
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
3. Use `se = FALSE` to remove the confidence envelope from the scatterplots
Solution 3
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm", se = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
4. Remove `method = "lm"` from `[geom_smooth()](https://ggplot2.tidyverse.org/reference/geom_smooth.html)` to produce a curved fit line.
Solution 4
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)() +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time") +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Age")
```
5. Replace the default fill on the grouped density plot with a colour\-blind friendly version.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = 0.75)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Set2", # or "Dark2" or "Paired"
name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
4 Representing Summary Statistics
=================================
The layering approach that is used in `ggplot2` to make figures comes into its own when you want to include information about the distribution and spread of scores. In this section we introduce different ways of including summary statistics in your figures.
4\.1 Boxplots
-------------
As with `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, boxplots also require an x\- and y\-variable to be specified. In this case, `x` must be a discrete (categorical) variable, whilst `y` must be continuous.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure 4\.1: Basic boxplot.
### 4\.1\.1 Grouped boxplots
As with histograms and density plots, `fill` can be used to create grouped boxplots. This looks like a lot of complicated code at first glance, but most of it is just editing the axis labels.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.2: Grouped boxplots
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.2 Violin plots
-----------------
Violin plots display the distribution of a dataset and can be created by calling `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`. They are so\-called because the shape they make sometimes looks something like a violin. They are essentially sideways, mirrored density plots. Note that the below code is identical to the code used to draw the boxplots above, except for the call to `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)` rather than `geom_boxplot()`.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.3: Violin plot.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.3 Bar chart of means
-----------------------
Commonly, rather than visualising distributions of raw data, researchers will wish to visualise means using a bar chart with error bars. As with SPSS and Excel, `ggplot2` requires you to calculate the summary statistics and then plot the summary. There are at least two ways to do this: in the first, you make a table of summary statistics, as we did earlier when calculating the participant demographics, and then plot that table. The second approach is to calculate the statistics within a layer of the plot. That is the approach we will use below.
First we present code for making a bar chart. The code for bar charts is here because it is a common visualisation that is familiar to most researchers. However, we would urge you to use a visualisation that provides more transparency about the distribution of the raw data, such as the violin\-boxplots we will present in the next section.
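Before turning to the second approach, which the rest of this section uses, here is a minimal sketch of the first approach, assuming `dplyr` is loaded: summarise the data into a small table of means and standard errors, then plot that table with `geom_col()` and `geom_errorbar()`. The object name `rt_summary` is ours, not from the paper.
```
# approach 1 (sketch): summarise first, then plot the summary table
rt_summary <- dat_long %>%
  group_by(condition) %>%
  summarise(mean_rt = mean(rt),
            se_rt = sd(rt) / sqrt(n()))

ggplot(rt_summary, aes(x = condition, y = mean_rt)) +
  geom_col() +
  geom_errorbar(aes(ymin = mean_rt - se_rt,
                    ymax = mean_rt + se_rt),
                width = .2)
```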
To summarise the data into means, we use a new function `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`. Rather than calling a `geom_*` function, we call `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` and specify how we want to summarise the data and how we want to present that summary in our figure.
* `fun` specifies the summary function that gives us the y\-value we want to plot, in this case, `mean`.
* `geom` specifies what shape or plot we want to use to display the summary. For the first layer we will specify `bar`. As with the other geom\-type functions we have shown you, this part of the `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` function is tied to the aesthetic mapping in the first line of code. The underlying statistics for a bar chart mean that we must specify an IV (x\-axis) as well as the DV (y\-axis).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar")
```
Figure 4\.4: Bar plot of means.
To add the error bars, another layer is added with a second call to `stat_summary`. This time, the function represents the type of error bars we wish to draw: you can choose from `mean_se` for standard error, `mean_cl_normal` for confidence intervals, or `mean_sdl` for standard deviation. `width` controls the width of the error bars \- try changing the value to see what happens.
* Whilst `fun` returns a single value (y) per condition, `fun.data` returns the y\-value we want to plot plus its minimum and maximum values (ymin and ymax); in this case we use `mean_se`, as the short example below shows.
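As a quick check of what `fun.data` is expected to supply, you can call ggplot2's `mean_se()` helper directly on a vector of numbers; the values shown in the comment are approximate.
```
# mean_se() is the helper used when fun.data = "mean_se": it returns a
# data frame with one row containing y (the mean), ymin and ymax
# (the mean minus/plus one standard error).
mean_se(c(2, 4, 6, 8))
# approximately: y = 5, ymin = 3.71, ymax = 6.29
```
In the code below, `stat_summary()` computes these three values for each condition and the errorbar geom draws the range from `ymin` to `ymax`: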
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .2)
```
Figure 4\.5: Bar plot of means with error bars representing SE.
4\.4 Violin\-boxplot
--------------------
The power of the layered system for making figures is further highlighted by the ability to combine different types of plots. For example, rather than using a bar chart with error bars, one can easily create a single plot that includes the density of the distribution, confidence intervals, means and standard errors. In the below code we first draw a violin plot, then layer on a boxplot, a point for the mean (note `geom = "point"` instead of `"bar"`) and standard error bars (`geom = "errorbar"`). This plot does not require much more code to produce than the bar plot with error bars, yet the amount of information displayed is vastly superior.
* `fatten = NULL` in the boxplot geom removes the median line, which can make it easier to see the mean and error bars. Including this argument will result in the message `Removed 1 rows containing missing values (geom_segment)`, which is not a cause for concern. Removing this argument will reinstate the median line.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.6: Violin\-boxplot with mean dot and standard error bars.
It is important to note that the order of the layers matters, and it is worth experimenting with the order to see how it changes the plot. For example, if we call `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` followed by `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`, we get the following mess:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.7: Plot with the geoms in the wrong order.
### 4\.4\.1 Grouped violin\-boxplots
As with previous plots, another variable can be mapped to `fill` for the violin\-boxplot. (Remember to add a colourblind\-safe palette.) However, simply adding `fill` to the mapping causes the different components of the plot to become misaligned because they have different default positions:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.8: Grouped violin\-boxplots without repositioning.
To rectify this we need to adjust the argument `position` for each of the misaligned layers. `[position_dodge()](https://ggplot2.tidyverse.org/reference/position_dodge.html)` instructs R to move (dodge) the position of the plot component by the specified value; finding what value looks best can sometimes take trial and error.
```
# set the offset position of the geoms
pos <- [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(0.9)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.9: Grouped violin\-boxplots with repositioning.
4\.5 Customisation part 3
-------------------------
Combining multiple types of plots can present an issue with the colours, particularly when the fill and line colours are similar. For example, it is hard to make out the boxplot against the violin plot above.
There are a number of solutions to this problem. One solution is to adjust the transparency of each layer using `alpha`. The exact values needed can take trial and error:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language,
group = [paste](https://rdrr.io/r/base/paste.html)(condition, language))) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = 0.25, position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
alpha = 0.75,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.10: Using transparency on the fill color.
Alternatively, we can change the fill of individual geoms by adding `fill = "colour"` to each relevant geom. In the example below, we fill the boxplots with white. Because the boxplots are no longer filled according to language, but we still want four separate boxplots, we have to add an extra mapping to `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` to specify that the output should be grouped by the interaction of condition and language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL,
mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = [interaction](https://rdrr.io/r/base/interaction.html)(condition, language)),
fill = "white",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.11: Manually changing the fill color.
4\.6 Activities 3
-----------------
Before you go on, do the following:
1. Review all the code you have run so far. Try to identify the commonalities between each plot's code and the bits of the code you might change if you were using a different dataset.
2. Take a moment to recognise the complexity of the code you are now able to read.
3. For the violin\-boxplot, for `geom = "point"`, try changing `fun` to `median`
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "median", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
4. For the violin\-boxplot, for `geom = "errorbar"`, try changing `fun.data` to `mean_cl_normal` (for 95% CI)
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_cl_normal",
geom = "errorbar",
width = .1)
```
5. Go back to the grouped density plots and try changing the transparency with `alpha`.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = .4)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/representing-summary-statistics.html |
4 Representing Summary Statistics
=================================
The layering approach that is used in `ggplot2` to make figures comes into its own when you want to include information about the distribution and spread of scores. In this section we introduce different ways of including summary statistics in your figures.
4\.1 Boxplots
-------------
As with `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, boxplots also require an x\- and y\-variable to be specified. In this case, `x` must be a discrete, or categorical variable6, whilst `y` must be continuous.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure 4\.1: Basic boxplot.
### 4\.1\.1 Grouped boxplots
As with histograms and density plots, `fill` can be used to create grouped boxplots. This looks like a lot of complicated code at first glance, but most of it is just editing the axis labels.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.2: Grouped boxplots
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.2 Violin plots
-----------------
Violin plots display the distribution of a dataset and can be created by calling `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`. They are so\-called because the shape they make sometimes looks something like a violin. They are essentially sideways, mirrored density plots. Note that the below code is identical to the code used to draw the boxplots above, except for the call to `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)` rather than `geom_boxplot().`
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.3: Violin plot.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.3 Bar chart of means
-----------------------
Commonly, rather than visualising distributions of raw data, researchers will wish to visualise means using a bar chart with error bars. As with SPSS and Excel, `ggplot2` requires you to calculate the summary statistics and then plot the summary. There are at least two ways to do this, in the first you make a table of summary statistics as we did earlier when calculating the participant demographics and then plot that table. The second approach is to calculate the statistics within a layer of the plot. That is the approach we will use below.
First we present code for making a bar chart. The code for bar charts is here because it is a common visualisation that is familiar to most researchers. However, we would urge you to use a visualisation that provides more transparency about the distribution of the raw data, such as the violin\-boxplots we will present in the next section.
To summarise the data into means, we use a new function `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`. Rather than calling a `geom_*` function, we call `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` and specify how we want to summarise the data and how we want to present that summary in our figure.
* `fun` specifies the summary function that gives us the y\-value we want to plot, in this case, `mean`.
* `geom` specifies what shape or plot we want to use to display the summary. For the first layer we will specify `bar`. As with the other geom\-type functions we have shown you, this part of the `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` function is tied to the aesthetic mapping in the first line of code. The underlying statistics for a bar chart means that we must specify and IV (x\-axis) as well as the DV (y\-axis).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar")
```
Figure 4\.4: Bar plot of means.
To add the error bars, another layer is added with a second call to `stat_summary`. This time, the function represents the type of error bars we wish to draw, you can choose from `mean_se` for standard error, `mean_cl_normal` for confidence intervals, or `mean_sdl` for standard deviation. `width` controls the width of the error bars \- try changing the value to see what happens.
* Whilst `fun` returns a single value (y) per condition, `fun.data` returns the y\-values we want to plot plus their minimum and maximum values, in this case, `mean_se`
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .2)
```
Figure 4\.5: Bar plot of means with error bars representing SE.
4\.4 Violin\-boxplot
--------------------
The power of the layered system for making figures is further highlighted by the ability to combine different types of plots. For example, rather than using a bar chart with error bars, one can easily create a single plot that includes density of the distribution, confidence intervals, means and standard errors. In the below code we first draw a violin plot, then layer on a boxplot, a point for the mean (note `geom = "point"` instead of `"bar"`) and standard error bars (`geom = "errorbar"`). This plot does not require much additional code to produce than the bar plot with error bars, yet the amount of information displayed is vastly superior.
* `fatten = NULL` in the boxplot geom removes the median line, which can make it easier to see the mean and error bars. Including this argument will result in the message `Removed 1 rows containing missing values (geom_segment)` and is not a cause for concern. Removing this argument will reinstate the median line.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.6: Violin\-boxplot with mean dot and standard error bars.
It is important to note that the order of the layers matters and it is worth experimenting with the order to see where the order matters. For example, if we call `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` followed by `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`, we get the following mess:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.7: Plot with the geoms in the wrong order.
### 4\.4\.1 Grouped violin\-boxplots
As with previous plots, another variable can be mapped to `fill` for the violin\-boxplot. (Remember to add a colourblind\-safe palette.) However, simply adding `fill` to the mapping causes the different components of the plot to become misaligned because they have different default positions:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.8: Grouped violin\-boxplots without repositioning.
To rectify this we need to adjust the argument `position` for each of the misaligned layers. `[position_dodge()](https://ggplot2.tidyverse.org/reference/position_dodge.html)` instructs R to move (dodge) the position of the plot component by the specified value; finding what value looks best can sometimes take trial and error.
```
# set the offset position of the geoms
pos <- [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(0.9)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.9: Grouped violin\-boxplots with repositioning.
4\.5 Customisation part 3
-------------------------
Combining multiple type of plots can present an issue with the colours, particularly when the fill and line colours are similar. For example, it is hard to make out the boxplot against the violin plot above.
There are a number of solutions to this problem. One solution is to adjust the transparency of each layer using `alpha`. The exact values needed can take trial and error:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language,
group = [paste](https://rdrr.io/r/base/paste.html)(condition, language))) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = 0.25, position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
alpha = 0.75,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.10: Using transparency on the fill color.
Alternatively, we can change the fill of individual geoms by adding `fill = "colour"` to each relevant geom. In the example below, we fill the boxplots with white. Since all of the boxplots are no longer being filled according to language, but you still want a four separate boxplots, you have to add an extra mapping to `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` to specify that you want the output grouped by the interaction of condition and language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL,
mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = [interaction](https://rdrr.io/r/base/interaction.html)(condition, language)),
fill = "white",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.11: Manually changing the fill color.
4\.6 Activities 3
-----------------
Before you go on, do the following:
1. Review all the code you have run so far. Try to identify the commonalities between each plot's code and the bits of the code you might change if you were using a different dataset.
2. Take a moment to recognise the complexity of the code you are now able to read.
3. For the violin\-boxplot, for `geom = "point"`, try changing `fun` to `median`
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "median", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
4. For the violin\-boxplot, for `geom = "errorbar"`, try changing `fun.data` to `mean_cl_normal` (for 95% CI)
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_cl_normal",
geom = "errorbar",
width = .1)
```
5. Go back to the grouped density plots and try changing the transparency with `alpha`.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = .4)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
4\.1 Boxplots
-------------
As with `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`, boxplots also require an x\- and y\-variable to be specified. In this case, `x` must be a discrete, or categorical variable6, whilst `y` must be continuous.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure 4\.1: Basic boxplot.
### 4\.1\.1 Grouped boxplots
As with histograms and density plots, `fill` can be used to create grouped boxplots. This looks like a lot of complicated code at first glance, but most of it is just editing the axis labels.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.2: Grouped boxplots
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
### 4\.1\.1 Grouped boxplots
As with histograms and density plots, `fill` can be used to create grouped boxplots. This looks like a lot of complicated code at first glance, but most of it is just editing the axis labels.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-Word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.2: Grouped boxplots
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.2 Violin plots
-----------------
Violin plots display the distribution of a dataset and can be created by calling `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`. They are so\-called because the shape they make sometimes looks something like a violin. They are essentially sideways, mirrored density plots. Note that the below code is identical to the code used to draw the boxplots above, except for the call to `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)` rather than `geom_boxplot().`
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2",
name = "Group",
labels = [c](https://rdrr.io/r/base/c.html)("Bilingual", "Monolingual")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy")
```
Figure 4\.3: Violin plot.
Correction to paper
Please note that the code and figure for this plot has been corrected from the published paper due to the labels "Word" and "Non\-word" being incorrectly reversed. This is of course mortifying as authors, although it does provide a useful teachable moment that R will do what you tell it to do, no more, no less, regardless of whether what you tell it to do is wrong.
4\.3 Bar chart of means
-----------------------
Commonly, rather than visualising distributions of raw data, researchers will wish to visualise means using a bar chart with error bars. As with SPSS and Excel, `ggplot2` requires you to calculate the summary statistics and then plot the summary. There are at least two ways to do this, in the first you make a table of summary statistics as we did earlier when calculating the participant demographics and then plot that table. The second approach is to calculate the statistics within a layer of the plot. That is the approach we will use below.
First we present code for making a bar chart. The code for bar charts is here because it is a common visualisation that is familiar to most researchers. However, we would urge you to use a visualisation that provides more transparency about the distribution of the raw data, such as the violin\-boxplots we will present in the next section.
To summarise the data into means, we use a new function `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`. Rather than calling a `geom_*` function, we call `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` and specify how we want to summarise the data and how we want to present that summary in our figure.
* `fun` specifies the summary function that gives us the y\-value we want to plot, in this case, `mean`.
* `geom` specifies what shape or plot we want to use to display the summary. For the first layer we will specify `bar`. As with the other geom\-type functions we have shown you, this part of the `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` function is tied to the aesthetic mapping in the first line of code. The underlying statistics for a bar chart means that we must specify and IV (x\-axis) as well as the DV (y\-axis).
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar")
```
Figure 4\.4: Bar plot of means.
To add the error bars, another layer is added with a second call to `stat_summary`. This time, the function represents the type of error bars we wish to draw: you can choose from `mean_se` for standard error, `mean_cl_normal` for confidence intervals, or `mean_sdl` for standard deviation. `width` controls the width of the error bars \- try changing the value to see what happens.
* Whilst `fun` returns a single value (y) per condition, `fun.data` returns the y\-values we want to plot plus their minimum and maximum values, in this case, `mean_se`.
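If you want to see exactly what `mean_se` produces, you can call it directly on a vector of values (it is exported by `ggplot2`); the reaction times below are made up purely for illustration:
```
# mean_se() returns a data frame with columns y (the mean), ymin and
# ymax (the mean minus/plus one standard error) - exactly the values
# the errorbar geom needs
mean_se(c(450, 470, 520, 490, 510))
# y = 488, with ymin and ymax roughly 475.2 and 500.8
```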
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "bar") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .2)
```
Figure 4\.5: Bar plot of means with error bars representing SE.
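For comparison, here is a minimal sketch of the first approach mentioned above, in which the summary table is computed before plotting. The object and column names (`dat_summary`, `mean_rt`, `se_rt`) are our own choices, and we assume `dplyr` is loaded alongside `ggplot2`:
```
# first approach: compute the summary statistics yourself, then plot them
dat_summary <- dat_long %>%
  group_by(condition) %>%
  summarise(mean_rt = mean(rt),
            se_rt = sd(rt) / sqrt(n()))

# geom_col() plots the supplied values directly rather than counting rows
ggplot(dat_summary, aes(x = condition, y = mean_rt)) +
  geom_col() +
  geom_errorbar(aes(ymin = mean_rt - se_rt,
                    ymax = mean_rt + se_rt),
                width = .2)
```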
4\.4 Violin\-boxplot
--------------------
The power of the layered system for making figures is further highlighted by the ability to combine different types of plots. For example, rather than using a bar chart with error bars, one can easily create a single plot that includes the density of the distribution, confidence intervals, means and standard errors. In the below code we first draw a violin plot, then layer on a boxplot, a point for the mean (note `geom = "point"` instead of `"bar"`) and standard error bars (`geom = "errorbar"`). This plot does not require much more code to produce than the bar plot with error bars, yet the amount of information displayed is vastly superior.
* `fatten = NULL` in the boxplot geom removes the median line, which can make it easier to see the mean and error bars. Including this argument will result in the message `Removed 1 rows containing missing values (geom_segment)` and is not a cause for concern. Removing this argument will reinstate the median line.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.6: Violin\-boxplot with mean dot and standard error bars.
It is important to note that the order of the layers matters, and it is worth experimenting with the order to see how it changes the plot. For example, if we call `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` followed by `[geom_violin()](https://ggplot2.tidyverse.org/reference/geom_violin.html)`, we get the following mess:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
Figure 4\.7: Plot with the geoms in the wrong order.
### 4\.4\.1 Grouped violin\-boxplots
As with previous plots, another variable can be mapped to `fill` for the violin\-boxplot. (Remember to add a colourblind\-safe palette.) However, simply adding `fill` to the mapping causes the different components of the plot to become misaligned because they have different default positions:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.8: Grouped violin\-boxplots without repositioning.
To rectify this we need to adjust the argument `position` for each of the misaligned layers. `[position_dodge()](https://ggplot2.tidyverse.org/reference/position_dodge.html)` instructs R to move (dodge) the position of the plot component by the specified value; finding what value looks best can sometimes take trial and error.
```
# set the offset position of the geoms
pos <- [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(0.9)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.9: Grouped violin\-boxplots with repositioning.
4\.5 Customisation part 3
-------------------------
Combining multiple types of plots can present an issue with the colours, particularly when the fill and line colours are similar. For example, it is hard to make out the boxplot against the violin plot above.
There are a number of solutions to this problem. One solution is to adjust the transparency of each layer using `alpha`. Finding the exact values needed can take some trial and error:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language,
group = [paste](https://rdrr.io/r/base/paste.html)(condition, language))) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = 0.25, position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2,
fatten = NULL,
alpha = 0.75,
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.10: Using transparency on the fill color.
Alternatively, we can change the fill of individual geoms by adding `fill = "colour"` to each relevant geom. In the example below, we fill the boxplots with white. Since the boxplots are no longer being filled according to language, but you still want four separate boxplots, you have to add an extra mapping to `[geom_boxplot()](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)` to specify that you want the output grouped by the interaction of condition and language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(position = pos) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL,
mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = [interaction](https://rdrr.io/r/base/interaction.html)(condition, language)),
fill = "white",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean",
geom = "point",
position = pos) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1,
position = pos) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2")
```
Figure 4\.11: Manually changing the fill color.
4\.6 Activities 3
-----------------
Before you go on, do the following:
1. Review all the code you have run so far. Try to identify the commonalities between each plot's code and the bits of the code you might change if you were using a different dataset.
2. Take a moment to recognise the complexity of the code you are now able to read.
3. For the violin\-boxplot, for `geom = "point"`, try changing `fun` to `median`.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "median", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se",
geom = "errorbar",
width = .1)
```
4. For the violin\-boxplot, for `geom = "errorbar"`, try changing `fun.data` to `mean_cl_normal` (for a 95% CI).
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
# remove the median line with fatten = NULL
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_cl_normal",
geom = "errorbar",
width = .1)
```
5. Go back to the grouped density plots and try changing the transparency with `alpha`.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, fill = condition)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)(alpha = .4)+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[scale_fill_discrete](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)(name = "Condition",
labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word"))
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/multi-part-plots.html |
5 Multi\-part Plots
===================
5\.1 Interaction plots
----------------------
Interaction plots are commonly used to help display or interpret a factorial design. Just as with the bar chart of means, interaction plots represent data summaries and so they are built up with a series of calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`.
* `shape` acts much like `fill` in previous plots, except that rather than producing different colour fills for each level of the IV, the data points are given different shapes.
* `size` lets you change the size of lines and points. If you want different groups to be different sizes (for example, the sample size of each study when showing the results of a meta\-analysis or the population of a city on a map), set this inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function; if you want to change the size for all groups, set it inside the relevant `geom_*()` function (see the short sketch at the end of this subsection).
* `[scale_color_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)` works much like `[scale_color_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)` except that it lets you specify the colour values manually, instead of them being automatically applied based on the palette. You can specify RGB colour values or a list of predefined colour names \-\- all available options can be found by running `[colours()](https://rdrr.io/r/grDevices/colors.html)` in the console. Other manual scales are also available, for example, `[scale_fill_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)`.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
shape = language,
group = language,
color = language)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 3) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2) +
[scale_color_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(values = [c](https://rdrr.io/r/base/c.html)("blue", "darkorange")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.1: Interaction plot.
You can use redundant aesthetics, such as indicating the language groups using both colour and shape, in order to increase accessibility for colourblind readers or when images are printed in greyscale.
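To make the earlier point about `size` concrete, the short sketch below contrasts mapping size inside `aes()`, so that it varies with a variable, with setting a single fixed size inside the geom. `age` is used here only because it is a convenient numeric column in `dat_long`:
```
# size mapped inside aes(): point size varies with each participant's age
ggplot(dat_long, aes(x = condition, y = rt, size = age)) +
  geom_point(alpha = .2)

# size set inside the geom: every point is drawn at the same fixed size
ggplot(dat_long, aes(x = condition, y = rt)) +
  geom_point(size = 3, alpha = .2)
```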
5\.2 Combined interaction plots
-------------------------------
A more complex interaction plot can be produced that takes advantage of the layers to visualise not only the overall interaction, but the change across conditions for each participant.
This code is more complex than all prior code because it does not use a universal mapping of the plot aesthetics. In our code so far, the aesthetic mapping (`aes`) of the plot has been specified in the first line of code because all layers used the same mapping. However, it is also possible for each layer to use a different mapping \-\- we encourage you to build up the plot by running each line of code sequentially to see how it all combines.
* The first call to `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` sets up the default mappings of the plot that will be used unless otherwise specified \- the `x`, `y` and `group` variable. Note the addition of `shape`, which will vary the shape of the geom according to the language variable.
* `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` overrides the default mapping by setting its own `colour` to draw the data points from each language group in a different colour. `alpha` is set to a low value to aid readability.
* Similarly, `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` overrides the default grouping variable so that a line is drawn to connect the individual data points for each *participant* (`group = id`) rather than each language group, and also sets the colours.
* Finally, the calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` remain largely as they were, with the exception of setting `colour = "black"` and `size = 2` so that the overall means and error bars can be more easily distinguished from the individual data points. Because they do not specify an individual mapping, they use the defaults (e.g., the lines are connected by language group). For the error bars, the lines are again made solid.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
group = language, shape = language)) +
# adds raw data points in each condition
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
# add lines to connect each participant's data points across conditions
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
# add data points representing cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
# add lines connecting cell means by condition
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
# add errorbars to cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar",
width = .2, colour = "black") +
# change colours and theme
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.2: Interaction plot with by\-participant data.
5\.3 Facets
-----------
So far we have produced single plots that display all the desired variables. However, there are situations in which it may be useful to create separate plots for each level of a variable. This can also help with accessibility when used instead of or in addition to group colours. The below code is an adaptation of the code used to produce the grouped scatterplot (see Figure [4\.8](representing-summary-statistics.html#fig:viobox2)) in which it may be easier to see how the relationship changes when the data are not overlaid.
* Rather than using `colour = condition` to produce different colours for each level of `condition`, this variable is instead passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`.
* Set the number of rows with `nrow` or the number of columns with `ncol`. If you don't specify this, `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` will make a best guess.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(facets = [vars](https://ggplot2.tidyverse.org/reference/vars.html)(condition), nrow = 2)
```
Figure 5\.3: Faceted scatterplot
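If you would rather arrange the panels in columns, the same plot can be faceted with `ncol` instead. This is only a variant of the code above and the resulting figure is not shown here.
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  # two columns of panels rather than two rows
  facet_wrap(facets = vars(condition), ncol = 2)
```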
As another example, we can use `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` as an alternative to the grouped violin\-boxplot (see Figure [4\.9](representing-summary-statistics.html#fig:viobox3)) in which the variable `language` is passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` rather than `fill`. Using the tilde (`~`) to specify which factor is faceted is an alternative to using `facets = vars(factor)` like above. You may find it helpful to translate `~` as **by**, e.g., facet the plot by language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.4: Faceted violin\-boxplot
Finally, note that one way to edit the labels for faceted variables involves converting the `language` column into a factor. This allows you to set the order of the `levels` and the `labels` to display.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.5: Faceted violin\-boxplot with updated labels
5\.4 Storing plots
------------------
Just like with datasets, plots can be saved to objects. The below code saves the histograms we produced for reaction time and accuracy to objects named `p1` and `p2`. These plots can then be viewed by calling the object name in the console.
```
p1 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, color = "black")
p2 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, color = "black")
```
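For example, typing the object name on its own line draws the stored plot:
```
p1 # displays the reaction time histogram
p2 # displays the accuracy histogram
```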
Importantly, layers can then be added to these saved objects. For example, the below code adds a theme to the plot saved in `p1` and saves it as a new object `p3`. This is important because many of the examples of `ggplot2` code you will find in online help forums use the `p +` format to build up plots but fail to explain what this means, which can be confusing to beginners.
```
p3 <- p1 + [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
5\.5 Saving plots as images
---------------------------
In addition to saving plots to objects for further use in R, the function `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` can be used to save plots as images on your hard drive. The only required argument for `ggsave` is the file name of the image file you will create, complete with file extension (this can be "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf"). By default, `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` will save the last plot displayed. However, you can also specify a specific plot object if you have one saved.
```
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png") # save last displayed plot
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png", plot = p3) # save plot p3
```
The width, height and resolution of the image can all be manually adjusted. Fonts will scale with these sizes, and may look different to the preview images you see in the Viewer tab. The help documentation is useful here (type `[?ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)` in the console to access the help).
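For example, a minimal sketch of setting the dimensions and resolution explicitly; the file name and sizes here are arbitrary placeholders.
```
ggsave(filename = "my_plot.png", plot = p3,
       width = 8, height = 6, units = "in", dpi = 300)
```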
5\.6 Multiple plots
-------------------
As well as creating separate plots for each level of a variable using `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`, you may also wish to display multiple different plots together. The `patchwork` package provides an intuitive way to do this. Once it is loaded with `[library(patchwork)](https://patchwork.data-imaginist.com)`, you simply need to save the plots you wish to combine to objects as above and use the operators `+`, `/`, `()` and `|` to specify the layout of the final figure.
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
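As noted above, a combined figure can also be stored in an object and then written to disk with `ggsave()`; a minimal sketch, with a placeholder file name:
```
combined <- p1 / p2 # store the stacked layout in an object
ggsave(filename = "combined_plot.png", plot = combined)
```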
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots named `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
5\.7 Customisation part 4
-------------------------
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
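For instance, the following keeps the y\-axis label from above but suppresses the x\-axis label entirely:
```
p1 + labs(x = NULL,
          y = "Number of participants")
```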
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 5\.9: Violin\-boxplot with redundant facets and fill.
5\.8 Activities 4
-----------------
Before you go on, do the following:
1. Rather than mapping both variables (`condition` and `language`) to a single interaction plot with individual participant data, instead produce a faceted plot that separates the monolingual and bilingual data. All visual elements should remain the same (colours and shapes) and you should also take care not to have any redundant legends.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, group = language, shape = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2, colour = "black") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(shape = "none", colour = "none")
```
```
# this wasn't easy so if you got it, well done!
```
2. Choose your favourite three plots you've produced so far in this tutorial, tidy them up with axis labels, your preferred colour scheme, and any necessary titles, and then combine them using `patchwork`. If you're feeling particularly proud of them, post them on Twitter using \#PsyTeachR.
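One possible approach to the second activity is sketched below; the labels, theme, and layout are only placeholders for your own choices.
```
# assumes library(patchwork) is loaded and p1, p2, p3 exist from earlier sections
p1_tidy <- p1 + labs(x = "Mean reaction time (ms)", y = "Count") + theme_minimal()
p2_tidy <- p2 + labs(x = "Accuracy", y = "Count") + theme_minimal()
p3_tidy <- p3 + labs(title = "Distribution of reaction times")

# combine with patchwork: two plots side-by-side on top, one below
(p1_tidy | p2_tidy) / p3_tidy
```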
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/multi-part-plots.html |
5 Multi\-part Plots
===================
5\.1 Interaction plots
----------------------
Interaction plots are commonly used to help display or interpret a factorial design. Just as with the bar chart of means, interaction plots represent data summaries and so they are built up with a series of calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`.
* `shape` acts much like `fill` in previous plots, except that rather than producing different colour fills for each level of the IV, the data points are given different shapes.
* `size` lets you change the size of lines and points. If you want different groups to be different sizes (for example, the sample size of each study when showing the results of a meta\-analysis or population of a city on a map), set this inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function; if you want to change the size for all groups, set it inside the relevant `geom_*()` function.
* `[scale_color_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)` works much like `[scale_color_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)` except that it lets you specify the colour values manually, instead of them being automatically applied based on the palette. You can specify RGB colour values or a list of predefined colour names \-\- all available options can be found by running `[colours()](https://rdrr.io/r/grDevices/colors.html)` in the console. Other manual scales are also available, for example, `[scale_fill_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)`.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
shape = language,
group = language,
color = language)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 3) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2) +
[scale_color_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(values = [c](https://rdrr.io/r/base/c.html)("blue", "darkorange")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.1: Interaction plot.
You can use redundant aesthetics, such as indicating the language groups using both colour and shape, in order to increase accessibility for colourblind readers or when images are printed in greyscale.
5\.2 Combined interaction plots
-------------------------------
A more complex interaction plot can be produced that takes advantage of the layers to visualise not only the overall interaction, but the change across conditions for each participant.
This code is more complex than all prior code because it does not use a universal mapping of the plot aesthetics. In our code so far, the aesthetic mapping (`aes`) of the plot has been specified in the first line of code because all layers used the same mapping. However, it is also possible for each layer to use a different mapping \-\- we encourage you to build up the plot by running each line of code sequentially to see how it all combines.
* The first call to `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` sets up the default mappings of the plot that will be used unless otherwise specified \- the `x`, `y` and `group` variable. Note the addition of `shape`, which will vary the shape of the geom according to the language variable.
* `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` overrides the default mapping by setting its own `colour` to draw the data points from each language group in a different colour. `alpha` is set to a low value to aid readability.
* Similarly, `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` overrides the default grouping variable so that a line is drawn to connect the individual data points for each *participant* (`group = id`) rather than each language group, and also sets the colours.
* Finally, the calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` remain largely as they were, with the exception of setting `colour = "black"` and `size = 2` so that the overall means and error bars can be more easily distinguished from the individual data points. Because they do not specify an individual mapping, they use the defaults (e.g., the lines are connected by language group). For the error bars, the lines are again made solid.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
group = language, shape = language)) +
# adds raw data points in each condition
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
# add lines to connect each participant's data points across conditions
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
# add data points representing cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
# add lines connecting cell means by condition
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
# add errorbars to cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar",
width = .2, colour = "black") +
# change colours and theme
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.2: Interaction plot with by\-participant data.
5\.3 Facets
-----------
So far we have produced single plots that display all the desired variables. However, there are situations in which it may be useful to create separate plots for each level of a variable. This can also help with accessibility when used instead of or in addition to group colours. The below code is an adaptation of the code used to produce the grouped scatterplot (see Figure [4\.8](representing-summary-statistics.html#fig:viobox2)) in which it may be easier to see how the relationship changes when the data are not overlaid.
* Rather than using `colour = condition` to produce different colours for each level of `condition`, this variable is instead passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`.
* Set the number of rows with `nrow` or the number of columns with `ncol`. If you don't specify this, `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` will make a best guess.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(facets = [vars](https://ggplot2.tidyverse.org/reference/vars.html)(condition), nrow = 2)
```
Figure 5\.3: Faceted scatterplot
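As a variant of the code above (output not shown), you could arrange the panels in columns by swapping `nrow` for `ncol`:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  # two columns of panels rather than two rows
  facet_wrap(facets = vars(condition), ncol = 2)
```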
As another example, we can use `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` as an alternative to the grouped violin\-boxplot (see Figure [4\.9](representing-summary-statistics.html#fig:viobox3)) in which the variable `language` is passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` rather than `fill`. Using the tilde (`~`) to specify which factor is faceted is an alternative to using `facets = vars(factor)` like above. You may find it helpful to translate `~` as **by**, e.g., facet the plot by language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.4: Faceted violin\-boxplot
Finally, note that one way to edit the labels for faceted variables involves converting the `language` column into a factor. This allows you to set the order of the `levels` and the `labels` to display.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.5: Faceted violin\-boxplot with updated labels
5\.4 Storing plots
------------------
Just like with datasets, plots can be saved to objects. The below code saves the histograms we produced for reaction time and accuracy to objects named `p1` and `p2`. These plots can then be viewed by calling the object name in the console.
```
p1 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, color = "black")
p2 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, color = "black")
```
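Typing a stored object's name then draws the plot, for example:
```
p1 # displays the reaction time histogram
p2 # displays the accuracy histogram
```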
Importantly, layers can then be added to these saved objects. For example, the below code adds a theme to the plot saved in `p1` and saves it as a new object `p3`. This is important because many of the examples of `ggplot2` code you will find in online help forums use the `p +` format to build up plots but fail to explain what this means, which can be confusing to beginners.
```
p3 <- p1 + [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
5\.5 Saving plots as images
---------------------------
In addition to saving plots to objects for further use in R, the function `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` can be used to save plots as images on your hard drive. The only required argument for `ggsave` is the file name of the image file you will create, complete with file extension (this can be "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf"). By default, `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` will save the last plot displayed. However, you can also specify a specific plot object if you have one saved.
```
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png") # save last displayed plot
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png", plot = p3) # save plot p3
```
The width, height and resolution of the image can all be manually adjusted. Fonts will scale with these sizes, and may look different to the preview images you see in the Viewer tab. The help documentation is useful here (type `[?ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)` in the console to access the help).
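A minimal sketch of specifying the size and resolution; the file name and dimensions are placeholders you should adjust:
```
ggsave(filename = "my_plot.png", plot = p3,
       width = 8, height = 6, units = "in", dpi = 300)
```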
5\.6 Multiple plots
-------------------
As well as creating separate plots for each level of a variable using `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`, you may also wish to display multiple different plots together. The `patchwork` package provides an intuitive way to do this. Once it is loaded with `[library(patchwork)](https://patchwork.data-imaginist.com)`, you simply need to save the plots you wish to combine to objects as above and use the operators `+`, `/`, `()` and `|` to specify the layout of the final figure.
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
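Since combined figures are ordinary objects, they can be stored and then passed to `ggsave()`; a sketch with a placeholder file name:
```
combined <- p1 / p2 # store the stacked layout in an object
ggsave(filename = "combined_plot.png", plot = combined)
```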
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots named `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
5\.7 Customisation part 4
-------------------------
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
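For example, this keeps the y\-axis label but drops the x\-axis label:
```
p1 + labs(x = NULL,
          y = "Number of participants")
```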
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 5\.9: Violin\-boxplot with redundant facets and fill.
5\.8 Activities 4
-----------------
Before you go on, do the following:
1. Rather than mapping both variables (`condition` and `language`) to a single interaction plot with individual participant data, instead produce a faceted plot that separates the monolingual and bilingual data. All visual elements should remain the same (colours and shapes) and you should also take care not to have any redundant legends.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, group = language, shape = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2, colour = "black") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(shape = "none", colour = "none")
```
```
# this wasn't easy so if you got it, well done!
```
2. Choose your favourite three plots you've produced so far in this tutorial, tidy them up with axis labels, your preferred colour scheme, and any necessary titles, and then combine them using `patchwork`. If you're feeling particularly proud of them, post them on Twitter using \#PsyTeachR.
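A sketch of one way to approach the second activity, with placeholder labels and layout that you should replace with your own:
```
# assumes library(patchwork) is loaded and p1, p2, p3 exist from earlier sections
p1_tidy <- p1 + labs(x = "Mean reaction time (ms)", y = "Count") + theme_minimal()
p2_tidy <- p2 + labs(x = "Accuracy", y = "Count") + theme_minimal()
p3_tidy <- p3 + labs(title = "Distribution of reaction times")

# combine with patchwork: two plots side-by-side on top, one below
(p1_tidy | p2_tidy) / p3_tidy
```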
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
group = language, shape = language)) +
# adds raw data points in each condition
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
# add lines to connect each participant's data points across conditions
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
# add data points representing cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
# add lines connecting cell means by condition
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
# add errorbars to cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar",
width = .2, colour = "black") +
# change colours and theme
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.2: Interaction plot with by\-participant data.
5\.3 Facets
-----------
So far we have produced single plots that display all the desired variables. However, there are situations in which it may be useful to create separate plots for each level of a variable. This can also help with accessibility when used instead of or in addition to group colours. The below code is an adaptation of the code used to produce the grouped scatterplot (see Figure [4\.8](representing-summary-statistics.html#fig:viobox2)) in which it may be easier to see how the relationship changes when the data are not overlaid.
* Rather than using `colour = condition` to produce different colours for each level of `condition`, this variable is instead passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`.
* Set the number of rows with `nrow` or the number of columns with `ncol`. If you don't specify this, `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` will make a best guess.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(facets = [vars](https://ggplot2.tidyverse.org/reference/vars.html)(condition), nrow = 2)
```
Figure 5\.3: Faceted scatterplot
As another example, we can use `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` as an alternative to the grouped violin\-boxplot (see Figure [4\.9](representing-summary-statistics.html#fig:viobox3)) in which the variable `language` is passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` rather than `fill`. Using the tilde (`~`) to specify which factor is faceted is an alternative to using `facets = vars(factor)` like above. You may find it helpful to translate `~` as **by**, e.g., facet the plot by language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.4: Facted violin\-boxplot
Finally, note that one way to edit the labels for faceted variables involves converting the `language` column into a factor. This allows you to set the order of the `levels` and the `labels` to display.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.5: Faceted violin\-boxplot with updated labels
5\.4 Storing plots
------------------
Just like with datasets, plots can be saved to objects. The below code saves the histograms we produced for reaction time and accuracy to objects named `p1` and `p2`. These plots can then be viewed by calling the object name in the console.
```
p1 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, color = "black")
p2 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, color = "black")
```
Importantly, layers can then be added to these saved objects. For example, the below code adds a theme to the plot saved in `p1` and saves it as a new object `p3`. This is important because many of the examples of `ggplot2` code you will find in online help forums use the `p +` format to build up plots but fail to explain what this means, which can be confusing to beginners.
```
p3 <- p1 + [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
5\.5 Saving plots as images
---------------------------
In addition to saving plots to objects for further use in R, the function `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` can be used to save plots as images on your hard drive. The only required argument for `ggsave` is the file name of the image file you will create, complete with file extension (this can be "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf"). By default, `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` will save the last plot displayed. However, you can also specify a specific plot object if you have one saved.
```
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png") # save last displayed plot
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png", plot = p3) # save plot p3
```
The width, height and resolution of the image can all be manually adjusted. Fonts will scale with these sizes, and may look different to the preview images you see in the Viewer tab. The help documentation is useful here (type `[?ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)` in the console to access the help).
5\.6 Multiple plots
-------------------
As well as creating separate plots for each level of a variable using `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`, you may also wish to display multiple different plots together. The `patchwork` package provides an intuitive way to do this. Once it is loaded with `[library(patchwork)](https://patchwork.data-imaginist.com)`, you simply need to save the plots you wish to combine to objects as above and use the operators `+`, `/` `()` and `|` to specify the layout of the final figure.
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots names `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots names `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
5\.7 Customisation part 4
-------------------------
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 3\.7: Violin\-boxplot with redundant facets and fill.
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 3\.7: Violin\-boxplot with redundant facets and fill.
5\.8 Activities 4
-----------------
Before you go on, do the following:
1. Rather than mapping both variables (`condition` and `language)` to a single interaction plot with individual participant data, instead produce a faceted plot that separates the monolingual and bilingual data. All visual elements should remain the same (colours and shapes) and you should also take care not to have any redundant legends.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, group = language, shape = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2, colour = "black") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(shape = FALSE, colour = FALSE)
```
```
# this wasn't easy so if you got it, well done!
```
2. Choose your favourite three plots you've produced so far in this tutorial, tidy them up with axis labels, your preferred colour scheme, and any necessary titles, and then combine them using `patchwork`. If you're feeling particularly proud of them, post them on Twitter using \#PsyTeachR.
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/multi-part-plots.html |
5 Multi\-part Plots
===================
5\.1 Interaction plots
----------------------
Interaction plots are commonly used to help display or interpret a factorial design. Just as with the bar chart of means, interaction plots represent data summaries and so they are built up with a series of calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`.
* `shape` acts much like `fill` in previous plots, except that rather than producing different colour fills for each level of the IV, the data points are given different shapes.
* `size` lets you change the size of lines and points. If you want different groups to be different sizes (for example, the sample size of each study when showing the results of a meta\-analysis, or the population of a city on a map), set this inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function; if you want to change the size for all groups, set it inside the relevant `geom_*()` function (see the short sketch after this list).
* `[scale_color_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)` works much like `[scale_color_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)` except that it lets you specify the colour values manually, instead of them being automatically applied based on the palette. You can specify RGB colour values or a list of predefined colour names \-\- all available options can be found by running `[colours()](https://rdrr.io/r/grDevices/colors.html)` in the console. Other manual scales are also available, for example, `[scale_fill_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)`.
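To illustrate the `size` distinction, here is a minimal sketch using a small hypothetical data frame (not part of `dat_long`): mapping `size` inside `aes()` makes point size reflect a variable, whereas setting it inside the geom applies one size to every point.
```
library(ggplot2)

# hypothetical meta-analysis-style data: one row per study
studies <- data.frame(
  year   = c(2015, 2017, 2019, 2021),
  effect = c(0.20, 0.50, 0.35, 0.60),
  n      = c(30, 120, 60, 250)
)

# size mapped to a variable: larger samples are drawn as larger points
ggplot(studies, aes(x = year, y = effect, size = n)) +
  geom_point()

# size set inside the geom: every point has the same size
ggplot(studies, aes(x = year, y = effect)) +
  geom_point(size = 3)
```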
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
shape = language,
group = language,
color = language)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 3) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2) +
[scale_color_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(values = [c](https://rdrr.io/r/base/c.html)("blue", "darkorange")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.1: Interaction plot.
You can use redundant aesthetics, such as indicating the language groups using both colour and shape, in order to increase accessibility for colourblind readers or when images are printed in greyscale.
5\.2 Combined interaction plots
-------------------------------
A more complex interaction plot can be produced that takes advantage of the layers to visualise not only the overall interaction, but the change across conditions for each participant.
This code is more complex than all prior code because it does not use a universal mapping of the plot aesthetics. In our code so far, the aesthetic mapping (`aes`) of the plot has been specified in the first line of code because all layers used the same mapping. However, it is also possible for each layer to use a different mapping \-\- we encourage you to build up the plot by running each line of code sequentially to see how it all combines.
* The first call to `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` sets up the default mappings of the plot that will be used unless otherwise specified \- the `x`, `y` and `group` variable. Note the addition of `shape`, which will vary the shape of the geom according to the language variable.
* `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` overrides the default mapping by setting its own `colour` to draw the data points from each language group in a different colour. `alpha` is set to a low value to aid readability.
* Similarly, `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` overrides the default grouping variable so that a line is drawn to connect the individual data points for each *participant* (`group = id`) rather than each language group, and also sets the colours.
* Finally, the calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` remain largely as they were, with the exception of setting `colour = "black"` and `size = 2` so that the overall means and error bars can be more easily distinguished from the individual data points. Because they do not specify an individual mapping, they use the defaults (e.g., the lines are connected by language group). For the error bars, the lines are again made solid.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
group = language, shape = language)) +
# adds raw data points in each condition
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
# add lines to connect each participant's data points across conditions
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
# add data points representing cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
# add lines connecting cell means by condition
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
# add errorbars to cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar",
width = .2, colour = "black") +
# change colours and theme
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.2: Interaction plot with by\-participant data.
5\.3 Facets
-----------
So far we have produced single plots that display all the desired variables. However, there are situations in which it may be useful to create separate plots for each level of a variable. This can also help with accessibility when used instead of or in addition to group colours. The below code is an adaptation of the code used to produce the grouped scatterplot (see Figure [4\.8](representing-summary-statistics.html#fig:viobox2)) in which it may be easier to see how the relationship changes when the data are not overlaid.
* Rather than using `colour = condition` to produce different colours for each level of `condition`, this variable is instead passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`.
* Set the number of rows with `nrow` or the number of columns with `ncol`. If you don't specify this, `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` will make a best guess.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(facets = [vars](https://ggplot2.tidyverse.org/reference/vars.html)(condition), nrow = 2)
```
Figure 5\.3: Faceted scatterplot
As another example, we can use `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` as an alternative to the grouped violin\-boxplot (see Figure [4\.9](representing-summary-statistics.html#fig:viobox3)) in which the variable `language` is passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` rather than `fill`. Using the tilde (`~`) to specify which factor is faceted is an alternative to using `facets = vars(factor)` like above. You may find it helpful to translate `~` as **by**, e.g., facet the plot by language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.4: Faceted violin\-boxplot
Finally, note that one way to edit the labels for faceted variables involves converting the `language` column into a factor. This allows you to set the order of the `levels` and the `labels` to display.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.5: Faceted violin\-boxplot with updated labels
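The same relabelling can also be done on the data itself before plotting, which keeps the `facet_wrap()` call short. A minimal sketch, assuming the `dplyr` package is available alongside `ggplot2` (the column and level names follow the examples above):
```
# recode the language column once, then facet on it directly
dat_long_relabelled <- dplyr::mutate(
  dat_long,
  language = factor(language,
                    levels = c("monolingual", "bilingual"),
                    labels = c("Monolingual participants",
                               "Bilingual participants"))
)

ggplot(dat_long_relabelled, aes(x = condition, y = rt)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  facet_wrap(~language) +
  theme_minimal()
```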
5\.4 Storing plots
------------------
Just like with datasets, plots can be saved to objects. The below code saves the histograms we produced for reaction time and accuracy to objects named `p1` and `p2`. These plots can then be viewed by calling the object name in the console.
```
p1 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, color = "black")
p2 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, color = "black")
```
Importantly, layers can then be added to these saved objects. For example, the below code adds a theme to the plot saved in `p1` and saves it as a new object `p3`. This is important because many of the examples of `ggplot2` code you will find in online help forums use the `p +` format to build up plots but fail to explain what this means, which can be confusing to beginners.
```
p3 <- p1 + [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
5\.5 Saving plots as images
---------------------------
In addition to saving plots to objects for further use in R, the function `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` can be used to save plots as images on your hard drive. The only required argument for `ggsave` is the file name of the image file you will create, complete with file extension (this can be "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf"). By default, `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` will save the last plot displayed. However, you can also specify a specific plot object if you have one saved.
```
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png") # save last displayed plot
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png", plot = p3) # save plot p3
```
The width, height and resolution of the image can all be manually adjusted. Fonts will scale with these sizes, and may look different to the preview images you see in the Viewer tab. The help documentation is useful here (type `[?ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)` in the console to access the help).
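As a rough illustration, the call below saves `p3` with explicit dimensions and resolution; the values here are arbitrary and should be adjusted to suit where the image will be used:
```
ggsave(filename = "my_plot.png", plot = p3,
       width = 8, height = 6, units = "in", dpi = 300)
```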
5\.6 Multiple plots
-------------------
As well as creating separate plots for each level of a variable using `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`, you may also wish to display multiple different plots together. The `patchwork` package provides an intuitive way to do this. Once it is loaded with `[library(patchwork)](https://patchwork.data-imaginist.com)`, you simply need to save the plots you wish to combine to objects as above and use the operators `+`, `/`, `()` and `|` to specify the layout of the final figure.
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
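Because a `patchwork` layout is itself a plot object, it can be stored and passed to `ggsave()` like any single plot, as noted above. A minimal sketch (the file name is arbitrary):
```
combined <- p1 / p2   # store the stacked layout
ggsave(filename = "combined_plot.png", plot = combined)
```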
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots named `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
5\.7 Customisation part 4
-------------------------
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
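For instance, this small variation on the code above drops the x\-axis label entirely while keeping the title:
```
p1 + labs(x = NULL,
          title = "Distribution of reaction times")
```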
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 5\.9: Violin\-boxplot with redundant facets and fill.
5\.8 Activities 4
-----------------
Before you go on, do the following:
1. Rather than mapping both variables (`condition` and `language`) to a single interaction plot with individual participant data, instead produce a faceted plot that separates the monolingual and bilingual data. All visual elements should remain the same (colours and shapes) and you should also take care not to have any redundant legends.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, group = language, shape = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2, colour = "black") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(shape = "none", colour = "none")
```
```
# this wasn't easy so if you got it, well done!
```
2. Choose your favourite three plots you've produced so far in this tutorial, tidy them up with axis labels, your preferred colour scheme, and any necessary titles, and then combine them using `patchwork`. If you're feeling particularly proud of them, post them on Twitter using \#PsyTeachR.
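One possible skeleton for the combination step, assuming your three tidied plots are stored as `p1`, `p2` and `p3` (the title text is only a placeholder):
```
library(patchwork)

(p1 + p2) / p3 +
  plot_annotation(title = "My favourite three plots")
```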
5\.1 Interaction plots
----------------------
Interaction plots are commonly used to help display or interpret a factorial design. Just as with the bar chart of means, interaction plots represent data summaries and so they are built up with a series of calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)`.
* `shape` acts much like `fill` in previous plots, except that rather than producing different colour fills for each level of the IV, the data points are given different shapes.
* `size` lets you change the size of lines and points. If you want different groups to be different sizes (for example, the sample size of each study when showing the results of a meta\-analysis or population of a city on a map), set this inside the `[aes()](https://ggplot2.tidyverse.org/reference/aes.html)` function; if you want to change the size for all groups, set it inside the relevant `geom_*()` function'.
* `[scale_color_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)` works much like `[scale_color_discrete()](https://ggplot2.tidyverse.org/reference/scale_colour_discrete.html)` except that it lets you specify the colour values manually, instead of them being automatically applied based on the palette. You can specify RGB colour values or a list of predefined colour names \-\- all available options can be found by running `[colours()](https://rdrr.io/r/grDevices/colors.html)` in the console. Other manual scales are also available, for example, `[scale_fill_manual()](https://ggplot2.tidyverse.org/reference/scale_manual.html)`.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
shape = language,
group = language,
color = language)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 3) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2) +
[scale_color_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(values = [c](https://rdrr.io/r/base/c.html)("blue", "darkorange")) +
[theme_classic](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.1: Interaction plot.
You can use redundant aesthetics, such as indicating the language groups using both colour and shape, in order to increase accessibility for colourblind readers or when images are printed in greyscale.
5\.2 Combined interaction plots
-------------------------------
A more complex interaction plot can be produced that takes advantage of the layers to visualise not only the overall interaction, but the change across conditions for each participant.
This code is more complex than all prior code because it does not use a universal mapping of the plot aesthetics. In our code so far, the aesthetic mapping (`aes`) of the plot has been specified in the first line of code because all layers used the same mapping. However, is is also possible for each layer to use a different mapping \-\- we encourage you to build up the plot by running each line of code sequentially to see how it all combines.
* The first call to `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` sets up the default mappings of the plot that will be used unless otherwise specified \- the `x`, `y` and `group` variable. Note the addition of `shape`, which will vary the shape of the geom according to the language variable.
* `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)` overrides the default mapping by setting its own `colour` to draw the data points from each language group in a different colour. `alpha` is set to a low value to aid readability.
* Similarly, `[geom_line()](https://ggplot2.tidyverse.org/reference/geom_path.html)` overrides the default grouping variable so that a line is drawn to connect the individual data points for each *participant* (`group = id`) rather than each language group, and also sets the colours.
* Finally, the calls to `[stat_summary()](https://ggplot2.tidyverse.org/reference/stat_summary.html)` remain largely as they were, with the exception of setting `colour = "black"` and `size = 2` so that the overall means and error bars can be more easily distinguished from the individual data points. Because they do not specify an individual mapping, they use the defaults (e.g., the lines are connected by language group). For the error bars, the lines are again made solid.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt,
group = language, shape = language)) +
# adds raw data points in each condition
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
# add lines to connect each participant's data points across conditions
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
# add data points representing cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
# add lines connecting cell means by condition
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
# add errorbars to cell means
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar",
width = .2, colour = "black") +
# change colours and theme
[scale_color_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.2: Interaction plot with by\-participant data.
5\.3 Facets
-----------
So far we have produced single plots that display all the desired variables. However, there are situations in which it may be useful to create separate plots for each level of a variable. This can also help with accessibility when used instead of or in addition to group colours. The below code is an adaptation of the code used to produce the grouped scatterplot (see Figure [4\.8](representing-summary-statistics.html#fig:viobox2)) in which it may be easier to see how the relationship changes when the data are not overlaid.
* Rather than using `colour = condition` to produce different colours for each level of `condition`, this variable is instead passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`.
* Set the number of rows with `nrow` or the number of columns with `ncol`. If you don't specify this, `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` will make a best guess.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(facets = [vars](https://ggplot2.tidyverse.org/reference/vars.html)(condition), nrow = 2)
```
Figure 5\.3: Faceted scatterplot
As another example, we can use `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` as an alternative to the grouped violin\-boxplot (see Figure [4\.9](representing-summary-statistics.html#fig:viobox3)) in which the variable `language` is passed to `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)` rather than `fill`. Using the tilde (`~`) to specify which factor is faceted is an alternative to using `facets = vars(factor)` like above. You may find it helpful to translate `~` as **by**, e.g., facet the plot by language.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.4: Facted violin\-boxplot
Finally, note that one way to edit the labels for faceted variables involves converting the `language` column into a factor. This allows you to set the order of the `levels` and the `labels` to display.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 5\.5: Faceted violin\-boxplot with updated labels
5\.4 Storing plots
------------------
Just like with datasets, plots can be saved to objects. The below code saves the histograms we produced for reaction time and accuracy to objects named `p1` and `p2`. These plots can then be viewed by calling the object name in the console.
```
p1 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, color = "black")
p2 <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, color = "black")
```
Importantly, layers can then be added to these saved objects. For example, the below code adds a theme to the plot saved in `p1` and saves it as a new object `p3`. This is important because many of the examples of `ggplot2` code you will find in online help forums use the `p +` format to build up plots but fail to explain what this means, which can be confusing to beginners.
```
p3 <- p1 + [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
5\.5 Saving plots as images
---------------------------
In addition to saving plots to objects for further use in R, the function `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` can be used to save plots as images on your hard drive. The only required argument for `ggsave` is the file name of the image file you will create, complete with file extension (this can be "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf"). By default, `[ggsave()](https://ggplot2.tidyverse.org/reference/ggsave.html)` will save the last plot displayed. However, you can also specify a specific plot object if you have one saved.
```
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png") # save last displayed plot
[ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)(filename = "my_plot.png", plot = p3) # save plot p3
```
The width, height and resolution of the image can all be manually adjusted. Fonts will scale with these sizes, and may look different to the preview images you see in the Viewer tab. The help documentation is useful here (type `[?ggsave](https://ggplot2.tidyverse.org/reference/ggsave.html)` in the console to access the help).
5\.6 Multiple plots
-------------------
As well as creating separate plots for each level of a variable using `[facet_wrap()](https://ggplot2.tidyverse.org/reference/facet_wrap.html)`, you may also wish to display multiple different plots together. The `patchwork` package provides an intuitive way to do this. Once it is loaded with `[library(patchwork)](https://patchwork.data-imaginist.com)`, you simply need to save the plots you wish to combine to objects as above and use the operators `+`, `/` `()` and `|` to specify the layout of the final figure.
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots names `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
### 5\.6\.1 Combining two plots
Two plots can be combined side\-by\-side or stacked on top of each other. These combined plots could also be saved to an object and then passed to `ggsave`.
```
p1 + p2 # side-by-side
```
Figure 5\.6: Side\-by\-side plots with patchwork
```
p1 / p2 # stacked
```
Figure 5\.7: Stacked plots with patchwork
### 5\.6\.2 Combining three or more plots
Three or more plots can be combined in a number of ways. The `patchwork` syntax is relatively easy to grasp with a few examples and a bit of trial and error. The exact layout of your plots will depend upon a number of factors. Create three plots names `p1`, `p2` and `p3` and try running the examples below. Adjust the use of the operators to see how they change the layout. Each line of code will draw a different figure.
```
p1 / p2 / p3
(p1 + p2) / p3
p2 | p2 / p3
```
5\.7 Customisation part 4
-------------------------
### 5\.7\.1 Axis labels
Previously when we edited the main axis labels we used the `scale_*` functions. These functions are useful to know because they allow you to customise many aspects of the scale, such as the breaks and limits. However, if you only need to change the main axis `name`, there is a quicker way to do so using `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)`. The below code adds a layer to the plot that changes the axis labels for the histogram saved in `p1` and adds a title and subtitle. The title and subtitle do not conform to APA standards (more on APA formatting in the additional resources), however, for presentations and social media they can be useful.
```
p1 + [labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Mean reaction time (ms)",
y = "Number of participants",
title = "Distribution of reaction times",
subtitle = "for 100 participants")
```
Figure 5\.8: Plot with edited labels and title
You can also use `[labs()](https://ggplot2.tidyverse.org/reference/labs.html)` to remove axis labels, for example, try adjusting the above code to `x = NULL`.
### 5\.7\.2 Redundant aesthetics
So far when we have produced plots with colours, the colours were the only way that different levels of a variable were indicated, but it is sometimes preferable to indicate levels with both colour and other means, such as facets or x\-axis categories.
The code below adds `fill = language` to violin\-boxplots that are also faceted by language. We adjust `alpha` and use the brewer colour palette to customise the colours. Specifying a `fill` variable means that by default, R produces a legend for that variable. However, the use of colour is redundant with the facet labels, so you can remove this legend with the `guides` function.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .6) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("monolingual", "bilingual"),
labels = [c](https://rdrr.io/r/base/c.html)("Monolingual participants",
"Bilingual participants"))) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none")
```
Figure 3\.7: Violin\-boxplot with redundant facets and fill.
5\.8 Activities 4
-----------------
Before you go on, do the following:
1. Rather than mapping both variables (`condition` and `language`) to a single interaction plot with individual participant data, instead produce a faceted plot that separates the monolingual and bilingual data. All visual elements should remain the same (colours and shapes) and you should also take care not to have any redundant legends.
Solution
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, group = language, shape = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language),alpha = .2) +
[geom_line](https://ggplot2.tidyverse.org/reference/geom_path.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(group = id, colour = language), alpha = .2) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point", size = 2, colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "line", colour = "black") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .2, colour = "black") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~language) +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(shape = FALSE, colour = FALSE)
```
```
# this wasn't easy so if you got it, well done!
```
2. Choose your favourite three plots you've produced so far in this tutorial, tidy them up with axis labels, your preferred colour scheme, and any necessary titles, and then combine them using `patchwork`. If you're feeling particularly proud of them, post them on Twitter using \#PsyTeachR.
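If you would like a starting point, one possible layout is sketched below; the plot names, title, and tag style are illustrative, so adapt them to your own figures.

```
library(patchwork)

# two plots on the top row, one across the bottom,
# with a shared title and panel tags (A, B, C)
(p1 | p2) / p3 +
  plot_annotation(title = "Reaction time data",
                  tag_levels = "A")
```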
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/advanced-plots.html |
6 Advanced Plots
================
This tutorial has but scratched the surface of the visualisation options available using R. In the additional online resources we provide some further advanced plots and customisation options for those readers who are feeling confident with the content covered in this tutorial. However, the below plots give an idea of what is possible, and represent the favourite plots of the authorship team.
We will use some custom functions: `geom_split_violin()` and `geom_flat_violin()`, which you can access through the `introdataviz` package. These functions are modified from ([Allen et al., 2021](references.html#ref-raincloudplots)).
```
# how to install the introdataviz package to get split and half violin plots
devtools::[install_github](https://devtools.r-lib.org/reference/remote-reexports.html)("psyteachr/introdataviz")
```
6\.1 Split\-violin plots
------------------------
Split\-violin plots remove the redundancy of mirrored violin plots and make it easier to compare the distributions between multiple conditions.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, fill = language)) +
introdataviz::[geom_split_violin](https://rdrr.io/pkg/introdataviz/man/geom_split_violin.html)(alpha = .4, trim = FALSE) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, alpha = .6, fatten = NULL, show.legend = FALSE) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "pointrange", show.legend = F,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.175)) +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition", labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)",
breaks = [seq](https://rdrr.io/r/base/seq.html)(200, 800, 100),
limits = [c](https://rdrr.io/r/base/c.html)(200, 800)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2", name = "Language group") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 6\.1: Split\-violin plot
6\.2 Raincloud plots
--------------------
Raincloud plots combine a density plot, boxplot, raw data points, and any desired summary statistics for a complete visualisation of the data. They are so called because the density plot plus raw data is reminiscent of a rain cloud.
```
rain_height <- .1
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = "", y = rt, fill = language)) +
# clouds
introdataviz::[geom_flat_violin](https://rdrr.io/pkg/introdataviz/man/geom_flat_violin.html)(trim=FALSE, alpha = 0.4,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = rain_height+.05)) +
# rain
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language), size = 2, alpha = .5, show.legend = FALSE,
position = [position_jitter](https://ggplot2.tidyverse.org/reference/position_jitter.html)(width = rain_height, height = 0)) +
# boxplots
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = rain_height, alpha = 0.4, show.legend = FALSE,
outlier.shape = NA,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = -rain_height*2)) +
# mean and SE point in the cloud
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = mean_cl_normal, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(color = language), show.legend = FALSE,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = rain_height * 3)) +
# adjust layout
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "", expand = [c](https://rdrr.io/r/base/c.html)(rain_height*3, 0, 0, 0.7)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)",
breaks = [seq](https://rdrr.io/r/base/seq.html)(200, 800, 100),
limits = [c](https://rdrr.io/r/base/c.html)(200, 800)) +
[coord_flip](https://ggplot2.tidyverse.org/reference/coord_flip.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(condition,
levels = [c](https://rdrr.io/r/base/c.html)("word", "nonword"),
labels = [c](https://rdrr.io/r/base/c.html)("Word", "Non-Word")),
nrow = 2) +
# custom colours and theme
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2", name = "Language group") +
[scale_colour_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(panel.grid.major.y = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
legend.position = [c](https://rdrr.io/r/base/c.html)(0.8, 0.8),
legend.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "white", color = "white"))
```
Figure 6\.2: Raincloud plot. The point and line in the centre of each cloud represent its mean and 95% CI. The rain represents individual data points.
6\.3 Ridge plots
----------------
Ridge plots are a series of density plots that show the distribution of values for several groups. Figure [6\.3](advanced-plots.html#fig:ridgeplot) shows data from ([Nation, 2017](references.html#ref-Nation2017)) and demonstrates how effective this type of visualisation can be at conveying a lot of information intuitively whilst remaining visually attractive.
```
# read in data from Nation et al. 2017
data <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)("https://raw.githubusercontent.com/zonination/perceptions/master/probly.csv", col_types = "d")
# convert to long format and percents
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data, cols = [everything](https://tidyselect.r-lib.org/reference/everything.html)(),
names_to = "label",
values_to = "prob") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(label = [factor](https://rdrr.io/r/base/factor.html)(label, [names](https://rdrr.io/r/base/names.html)(data), [names](https://rdrr.io/r/base/names.html)(data)),
prob = prob/100)
# ridge plot
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = prob, y = label, fill = label)) +
ggridges::[geom_density_ridges](https://wilkelab.org/ggridges/reference/geom_density_ridges.html)(scale = 2, show.legend = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Assigned Probability",
limits = [c](https://rdrr.io/r/base/c.html)(0, 1), labels = scales::[percent](https://scales.r-lib.org/reference/label_percent.html)) +
# control space at top and bottom of plot
[scale_y_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "", expand = [c](https://rdrr.io/r/base/c.html)(0.02, 0, .08, 0)) +
[scale_fill_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)(option = "D") # colourblind-safe colours
```
Figure 6\.3: A ridge plot.
6\.4 Alluvial plots
-------------------
Alluvial plots visualise multi\-level categorical data through flows that can easily be traced in the diagram.
```
[library](https://rdrr.io/r/base/library.html)([ggalluvial](http://corybrunson.github.io/ggalluvial/))
# simulate data for 4 years of grades from 500 students
# with a correlation of 0.75 from year to year
# and a slight increase each year
dat <- faux::[sim_design](https://rdrr.io/pkg/faux/man/sim_design.html)(
within = [list](https://rdrr.io/r/base/list.html)(year = [c](https://rdrr.io/r/base/c.html)("Y1", "Y2", "Y3", "Y4")),
n = 500,
mu = [c](https://rdrr.io/r/base/c.html)(Y1 = 0, Y2 = .2, Y3 = .4, Y4 = .6), r = 0.75,
dv = "grade", long = TRUE, plot = FALSE) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
# convert numeric grades to letters with a defined probability
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(grade = faux::[norm2likert](https://rdrr.io/pkg/faux/man/norm2likert.html)(grade, prob = [c](https://rdrr.io/r/base/c.html)("3rd" = 5, "2.2" = 10, "2.1" = 40, "1st" = 20)),
grade = [factor](https://rdrr.io/r/base/factor.html)(grade, [c](https://rdrr.io/r/base/c.html)("1st", "2.1", "2.2", "3rd"))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
# reformat data and count each combination
tidyr::[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = year, values_from = grade) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
dplyr::[count](https://dplyr.tidyverse.org/reference/count.html)(Y1, Y2, Y3, Y4)
# plot data with colours by Year1 grades
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = n, axis1 = Y1, axis2 = Y2, axis3 = Y3, axis4 = Y4)) +
[geom_alluvium](http://corybrunson.github.io/ggalluvial/reference/geom_alluvium.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(fill = Y4), width = 1/6) +
[geom_stratum](http://corybrunson.github.io/ggalluvial/reference/geom_stratum.html)(fill = "grey", width = 1/3, color = "black") +
[geom_label](https://ggplot2.tidyverse.org/reference/geom_text.html)(stat = "stratum", [aes](https://ggplot2.tidyverse.org/reference/aes.html)(label = [after_stat](https://ggplot2.tidyverse.org/reference/aes_eval.html)(stratum))) +
[scale_fill_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)(name = "Final Classification") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(legend.position = "top")
```
Figure 6\.4: An alluvial plot showing the progression of student grades through the years.
6\.5 Maps
---------
Working with maps can be tricky. The `sf` package provides functions that work with ggplot2, such as `[geom_sf()](https://ggplot2.tidyverse.org/reference/ggsf.html)`. The `rnaturalearth` package provides high\-quality mapping coordinates.
```
[library](https://rdrr.io/r/base/library.html)([sf](https://r-spatial.github.io/sf/)) # for mapping geoms
[library](https://rdrr.io/r/base/library.html)([rnaturalearth](https://github.com/ropenscilabs/rnaturalearth)) # for map data
# get and bind country data
uk_sf <- [ne_states](https://rdrr.io/pkg/rnaturalearth/man/ne_states.html)(country = "united kingdom", returnclass = "sf")
ireland_sf <- [ne_states](https://rdrr.io/pkg/rnaturalearth/man/ne_states.html)(country = "ireland", returnclass = "sf")
islands <- [bind_rows](https://dplyr.tidyverse.org/reference/bind.html)(uk_sf, ireland_sf) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
  [filter](https://dplyr.tidyverse.org/reference/filter.html)(!is.na(geonunit)) # assumption: keep only rows with a geonunit (country) value
# set colours
country_colours <- [c](https://rdrr.io/r/base/c.html)("Scotland" = "#0962BA",
"Wales" = "#00AC48",
"England" = "#FF0000",
"Northern Ireland" = "#FFCD2C",
"Ireland" = "#F77613")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)() +
[geom_sf](https://ggplot2.tidyverse.org/reference/ggsf.html)(data = islands,
mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(fill = geonunit),
colour = NA,
alpha = 0.75) +
[coord_sf](https://ggplot2.tidyverse.org/reference/ggsf.html)(crs = sf::[st_crs](https://r-spatial.github.io/sf/reference/st_crs.html)(4326),
xlim = [c](https://rdrr.io/r/base/c.html)(-10.7, 2.1),
ylim = [c](https://rdrr.io/r/base/c.html)(49.7, 61)) +
[scale_fill_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(name = "Country",
values = country_colours)
```
Figure 6\.5: Map coloured by country.
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/advanced-plots.html |
6 Advanced Plots
================
This tutorial has but scratched the surface of the visualisation options available using R. In the additional online resources we provide some further advanced plots and customisation options for those readers who are feeling confident with the content covered in this tutorial. However, the below plots give an idea of what is possible, and represent the favourite plots of the authorship team.
We will use some custom functions: `geom_split_violin()` and `geom_flat_violin()`, which you can access through the `introdataviz` package. These functions are modified from ([Allen et al., 2021](references.html#ref-raincloudplots)).
```
# how to install the introdataviz package to get split and half violin plots
devtools::[install_github](https://devtools.r-lib.org/reference/remote-reexports.html)("psyteachr/introdataviz")
```
6\.1 Split\-violin plots
------------------------
Split\-violin plots remove the redundancy of mirrored violin plots and make it easier to compare the distributions between multiple conditions.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = rt, fill = language)) +
introdataviz::[geom_split_violin](https://rdrr.io/pkg/introdataviz/man/geom_split_violin.html)(alpha = .4, trim = FALSE) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, alpha = .6, fatten = NULL, show.legend = FALSE) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "pointrange", show.legend = F,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.175)) +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "Condition", labels = [c](https://rdrr.io/r/base/c.html)("Non-word", "Word")) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)",
breaks = [seq](https://rdrr.io/r/base/seq.html)(200, 800, 100),
limits = [c](https://rdrr.io/r/base/c.html)(200, 800)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2", name = "Language group") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure 6\.1: Split\-violin plot
6\.2 Raincloud plots
--------------------
Raincloud plots combine a density plot, boxplot, raw data points, and any desired summary statistics for a complete visualisation of the data. They are so called because the density plot plus raw data is reminiscent of a rain cloud.
```
rain_height <- .1
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = "", y = rt, fill = language)) +
# clouds
introdataviz::[geom_flat_violin](https://rdrr.io/pkg/introdataviz/man/geom_flat_violin.html)(trim=FALSE, alpha = 0.4,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = rain_height+.05)) +
# rain
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(colour = language), size = 2, alpha = .5, show.legend = FALSE,
position = [position_jitter](https://ggplot2.tidyverse.org/reference/position_jitter.html)(width = rain_height, height = 0)) +
# boxplots
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = rain_height, alpha = 0.4, show.legend = FALSE,
outlier.shape = NA,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = -rain_height*2)) +
# mean and SE point in the cloud
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = mean_cl_normal, mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(color = language), show.legend = FALSE,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = rain_height * 3)) +
# adjust layout
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "", expand = [c](https://rdrr.io/r/base/c.html)(rain_height*3, 0, 0, 0.7)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)",
breaks = [seq](https://rdrr.io/r/base/seq.html)(200, 800, 100),
limits = [c](https://rdrr.io/r/base/c.html)(200, 800)) +
[coord_flip](https://ggplot2.tidyverse.org/reference/coord_flip.html)() +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~[factor](https://rdrr.io/r/base/factor.html)(condition,
levels = [c](https://rdrr.io/r/base/c.html)("word", "nonword"),
labels = [c](https://rdrr.io/r/base/c.html)("Word", "Non-Word")),
nrow = 2) +
# custom colours and theme
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2", name = "Language group") +
[scale_colour_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(panel.grid.major.y = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
legend.position = [c](https://rdrr.io/r/base/c.html)(0.8, 0.8),
legend.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "white", color = "white"))
```
Figure 6\.2: Raincloud plot. The point and line in the centre of each cloud represent its mean and 95% CI. The rain represents individual data points.
6\.3 Ridge plots
----------------
Ridge plots are a series of density plots that show the distribution of values for several groups. Figure [6\.3](advanced-plots.html#fig:ridgeplot) shows data from ([Nation, 2017](references.html#ref-Nation2017)) and demonstrates how effective this type of visualisation can be to convey a lot of information very intuitively whilst being visually attractive.
```
# read in data from Nation et al. 2017
data <- [read_csv](https://readr.tidyverse.org/reference/read_delim.html)("https://raw.githubusercontent.com/zonination/perceptions/master/probly.csv", col_types = "d")
# convert to long format and percents
long <- [pivot_longer](https://tidyr.tidyverse.org/reference/pivot_longer.html)(data, cols = [everything](https://tidyselect.r-lib.org/reference/everything.html)(),
names_to = "label",
values_to = "prob") [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(label = [factor](https://rdrr.io/r/base/factor.html)(label, [names](https://rdrr.io/r/base/names.html)(data), [names](https://rdrr.io/r/base/names.html)(data)),
prob = prob/100)
# ridge plot
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = prob, y = label, fill = label)) +
ggridges::[geom_density_ridges](https://wilkelab.org/ggridges/reference/geom_density_ridges.html)(scale = 2, show.legend = FALSE) +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Assigned Probability",
limits = [c](https://rdrr.io/r/base/c.html)(0, 1), labels = scales::[percent](https://scales.r-lib.org/reference/label_percent.html)) +
# control space at top and bottom of plot
[scale_y_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(name = "", expand = [c](https://rdrr.io/r/base/c.html)(0.02, 0, .08, 0)) +
[scale_fill_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)(option = "D") # colourblind-safe colours
```
Figure 6\.3: A ridge plot.
6\.4 Alluvial plots
-------------------
Alluvial plots visualise multi\-level categorical data through flows that can easily be traced in the diagram.
```
[library](https://rdrr.io/r/base/library.html)([ggalluvial](http://corybrunson.github.io/ggalluvial/))
# simulate data for 4 years of grades from 500 students
# with a correlation of 0.75 from year to year
# and a slight increase each year
dat <- faux::[sim_design](https://rdrr.io/pkg/faux/man/sim_design.html)(
within = [list](https://rdrr.io/r/base/list.html)(year = [c](https://rdrr.io/r/base/c.html)("Y1", "Y2", "Y3", "Y4")),
n = 500,
mu = [c](https://rdrr.io/r/base/c.html)(Y1 = 0, Y2 = .2, Y3 = .4, Y4 = .6), r = 0.75,
dv = "grade", long = TRUE, plot = FALSE) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
# convert numeric grades to letters with a defined probability
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(grade = faux::[norm2likert](https://rdrr.io/pkg/faux/man/norm2likert.html)(grade, prob = [c](https://rdrr.io/r/base/c.html)("3rd" = 5, "2.2" = 10, "2.1" = 40, "1st" = 20)),
grade = [factor](https://rdrr.io/r/base/factor.html)(grade, [c](https://rdrr.io/r/base/c.html)("1st", "2.1", "2.2", "3rd"))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
# reformat data and count each combination
tidyr::[pivot_wider](https://tidyr.tidyverse.org/reference/pivot_wider.html)(names_from = year, values_from = grade) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
dplyr::[count](https://dplyr.tidyverse.org/reference/count.html)(Y1, Y2, Y3, Y4)
# plot data with colours by Year1 grades
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = n, axis1 = Y1, axis2 = Y2, axis3 = Y3, axis4 = Y4)) +
[geom_alluvium](http://corybrunson.github.io/ggalluvial/reference/geom_alluvium.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(fill = Y4), width = 1/6) +
[geom_stratum](http://corybrunson.github.io/ggalluvial/reference/geom_stratum.html)(fill = "grey", width = 1/3, color = "black") +
[geom_label](https://ggplot2.tidyverse.org/reference/geom_text.html)(stat = "stratum", [aes](https://ggplot2.tidyverse.org/reference/aes.html)(label = [after_stat](https://ggplot2.tidyverse.org/reference/aes_eval.html)(stratum))) +
[scale_fill_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)(name = "Final Classification") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(legend.position = "top")
```
Figure 6\.4: An alluvial plot showing the progression of student grades through the years.
6\.5 Maps
---------
Working with maps can be tricky. The `sf` package provides functions that work with ggplot2, such as `[geom_sf()](https://ggplot2.tidyverse.org/reference/ggsf.html)`. The `rnaturalearth` package provides high\-quality mapping coordinates.
```
[library](https://rdrr.io/r/base/library.html)([sf](https://r-spatial.github.io/sf/)) # for mapping geoms
[library](https://rdrr.io/r/base/library.html)([rnaturalearth](https://github.com/ropenscilabs/rnaturalearth)) # for map data
# get and bind country data
uk_sf <- [ne_states](https://rdrr.io/pkg/rnaturalearth/man/ne_states.html)(country = "united kingdom", returnclass = "sf")
ireland_sf <- [ne_states](https://rdrr.io/pkg/rnaturalearth/man/ne_states.html)(country = "ireland", returnclass = "sf")
islands <- [bind_rows](https://dplyr.tidyverse.org/reference/bind.html)(uk_sf, ireland_sf) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[filter](https://dplyr.tidyverse.org/reference/filter.html)(!is.na(geonunit)) # assumed filter: keep only regions with a country-level name
# set colours
country_colours <- [c](https://rdrr.io/r/base/c.html)("Scotland" = "#0962BA",
"Wales" = "#00AC48",
"England" = "#FF0000",
"Northern Ireland" = "#FFCD2C",
"Ireland" = "#F77613")
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)() +
[geom_sf](https://ggplot2.tidyverse.org/reference/ggsf.html)(data = islands,
mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(fill = geonunit),
colour = NA,
alpha = 0.75) +
[coord_sf](https://ggplot2.tidyverse.org/reference/ggsf.html)(crs = sf::[st_crs](https://r-spatial.github.io/sf/reference/st_crs.html)(4326),
xlim = [c](https://rdrr.io/r/base/c.html)(-10.7, 2.1),
ylim = [c](https://rdrr.io/r/base/c.html)(49.7, 61)) +
[scale_fill_manual](https://ggplot2.tidyverse.org/reference/scale_manual.html)(name = "Country",
values = country_colours)
```
Figure 6\.5: Map coloured by country.
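Because `scale_fill_manual()` matches the `country_colours` vector to the values of `geonunit` by name, it can be worth checking those values before plotting. A small sketch, assuming `islands` has been created as above:
```
# list the country-level names that scale_fill_manual() will match
# against the names of country_colours
unique(islands$geonunit)
```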
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/conclusion.html |
7 Conclusion
============
In this tutorial we aimed to provide a practical introduction to common data visualisation techniques using R. Whilst a number of the plots produced in this tutorial can be created in point\-and\-click software, the underlying skill\-set developed by making these visualisations is as powerful as it is extendable.
We hope that this tutorial serves as a jumping off point to encourage more researchers to adopt reproducible workflows and open\-access software, in addition to beautiful data visualisations.
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/additional-resources.html |
A Additional resources
======================
There are a number of incredible open\-access online resources that, using the skills you have developed in this tutorial, will allow you to start adapting your figures and plots to make them as informative as possible for your reader. Additionally, there are also many excellent resources that expand on some of the topics we have covered here briefly, particularly data wrangling, that can help you consolidate and expand your skill set.
**PsyTeachR**
The [psyTeachR](https://psyteachr.github.io/) team at the University of Glasgow School of Psychology and Neuroscience has successfully made the transition to teaching reproducible research using R across all undergraduate and postgraduate levels. Our curriculum now emphasizes essential ‘data science’ graduate skills that have been overlooked in traditional approaches to teaching, including programming skills, data visualisation, data wrangling and reproducible reports. Students learn about probability and inference through data simulation as well as by working with real datasets. These materials cover all the functions we have used in this tutorial in more depth and all have Creative Commons licences to allow their use and reuse without attribution.
* [Applied Data Skills](https://psyteachr.github.io/ads/)
* [Level 1 Data Skills](https://psyteachr.github.io/data-skills/)
* [Level 2 Analysis](https://psyteachr.github.io/analysis/)
* [Level 3 Statistical Models](https://psyteachr.github.io/stat-models/)
* [MSc Fundamentals of Quantitative Analysis](https://psyteachr.github.io/fun-quant/)
* [MSc Data Skills for Reproducible Research](https://psyteachr.github.io/reprores/)
**Installing R and RStudio**
* [Installing R \- PsyTeachR](https://psyteachr.github.io/data-skills-v1/installing-r.html)
* [Running R on your own computer (walkthrough videos) \- Danielle Navarro](https://www.youtube.com/playlist?list=PLRPB0ZzEYegOZivdelOuEn-R-XUN-DOjd)
**Intro to R and RStudio**
* RStudio Essentials: [Programming \- Part 1 (Writing code in RStudio)](https://www.rstudio.com/resources/webinars/programming-part-1-writing-code-in-rstudio/)
* RStudio Essentials: [Programming \- Part 2 (Debugging code in RStudio)](https://www.rstudio.com/resources/webinars/programming-part-2-debugging-code-in-rstudio/)
**R Markdown**
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/)
**Data wrangling**
* [R for Data Science](https://r4ds.had.co.nz/)
* [Text Mining with R](https://www.tidytextmining.com/)
**Data visualisation**
* [R Graph Gallery](https://www.r-graph-gallery.com/)
* [Fundamentals of Data Visualization](https://clauswilke.com/dataviz/)
* [Data Visualization: A Practical Introduction](https://socviz.co/)
* [Look at Data](https://socviz.co/lookatdata.html) from [Data Visualization for Social Science](http://socviz.co/)
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://exts.ggplot2.tidyverse.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
* [Drawing Beautiful Maps Programmatically](https://r-spatial.org/r/2018/10/25/ggplot2-sf.html)
* [gganimate](https://gganimate.com/)
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/additional-customisation-options.html |
B Additional customisation options
==================================
B.1 Adding lines to plots
-------------------------
**Vertical Lines \- geom\_vline()**
Often it can be useful to put a marker into our plots to highlight a certain criterion value. For example, if you were working with a scale that has a cut\-off, perhaps the Autism Spectrum Quotient 10 ([Allison et al., 2012](references.html#ref-allison2012toward)), then you might want to put a line at a score of 7, the point at which the researchers suggest the participant is referred further. Alternatively, thinking about the Stroop test we have looked at in this paper, perhaps you had a level of accuracy that you wanted to make sure was reached \- let's say 80%. If we refer back to Figure [3\.1](transforming-data.html#fig:histograms), which used the code below:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
and displayed the spread of the accuracy scores as such:
Figure B.1: Histogram of accuracy scores.
if we wanted to add a line at the 80% level then we could use the `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from the **`ggplot2`** package, with the argument `xintercept = 80`, meaning cut the x\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80)
```
Figure B.2: Histogram of accuracy scores with black solid vertical line indicating 80% accuracy.
Now that looks ok, but the line is a bit hard to see, so we can change the style (`linetype = value`), color (`color = "color"`) and weight (`size = value`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80, linetype = 2, color = "red", size = 1.5)
```
Figure B.3: Histogram of accuracy scores with red dashed vertical line indicating 80% accuracy.
**Horizontal Lines \- geom\_hline()**
Another situation may be that you want to put a horizontal line on your figure to mark a value of interest on the y\-axis. Again thinking about our Stroop experiment, perhaps we wanted to indicate the 80% accuracy line on our boxplot figures. If we look at Figure [4\.1](representing-summary-statistics.html#fig:boxplot1), which used this code to display the basic boxplot:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure B.4: Basic boxplot.
we could then use the `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from the **`ggplot2`** package, this time with the argument `yintercept = 80`, meaning cut the y\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80)
```
Figure B.5: Basic boxplot with black solid horizontal line indicating 80% accuracy.
and again we can embellish the line using the same arguments as above. We will put in some different values here just to show the changes:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80, linetype = 3, color = "blue", size = 2)
```
Figure B.6: Basic boxplot with blue dotted horizontal line indicating 80% accuracy.
**LineTypes**
One thing worth noting is that the `linetype` argument can be specified as either a numeric value or a word. They match up as follows, and the short sketch after the table shows the two forms producing the same line:
| Value | Word |
| --- | --- |
| linetype \= 0 | linetype \= "blank" |
| linetype \= 1 | linetype \= "solid" |
| linetype \= 2 | linetype \= "dashed" |
| linetype \= 3 | linetype \= "dotted" |
| linetype \= 4 | linetype \= "dotdash" |
| linetype \= 5 | linetype \= "longdash" |
| linetype \= 6 | linetype \= "twodash" |
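For instance, a short sketch using the accuracy histogram from above, where the numeric and word forms of `linetype` produce exactly the same dashed line (this is an illustrative example, not one of the original figures):
```
# linetype = 2 and linetype = "dashed" are interchangeable
ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  geom_vline(xintercept = 80, linetype = 2)        # by value

ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  geom_vline(xintercept = 80, linetype = "dashed") # by word
```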
**Diagonal Lines \- geom\_abline()**
The last type of line you might want to overlay on a figure is a diagonal line. For example, perhaps you have created a scatterplot and you want to show the true diagonal as a reference against the line of best fit. To show this, we will refer back to Figure [3\.5](transforming-data.html#fig:smooth-plot), which displayed the line of best fit for reaction time versus age, and used the following code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.7: Line of best fit for reaction time versus age.
By eye that would appear to be a fairly flat relationship, but we will add the true diagonal to help clarify. To do this we use `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, again from **`ggplot2`**, and we give it the arguments of the slope (`slope = value`) and the intercept (`intercept = value`). We are also going to scale the data to turn it into z\-scores to help us visualise the relationship better, as follows:
```
dat_long_scale <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(rt_zscore = (rt - [mean](https://rdrr.io/r/base/mean.html)(rt))/[sd](https://rdrr.io/r/stats/sd.html)(rt),
age_zscore = (age - [mean](https://rdrr.io/r/base/mean.html)(age))/[sd](https://rdrr.io/r/stats/sd.html)(age))
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.8: Line of best fit (blue line) for reaction time versus age with true diagonal shown (black line).
So now we can see the line of best fit (blue line) in relation to the true diagonal (black line). We will come back to why we z\-scored the data in a minute, but first let's finish tidying up this figure, using some of the customisation we have seen as it is a bit messy. Something like this might look cleaner:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0, linetype = "dashed", color = "black", size = .5) +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.9: Line of best fit (blue solid line) for reaction time versus age with true diagonal shown (black line dashed).
That maybe looks a bit cluttered but it gives a nice example of how you can use the different geoms for adding lines to add information to your figure, clearly visualising the weak relationship between reaction time and age. **Note:** Do remember about the layering system however; you will notice that in the code for Figure [B.9](additional-customisation-options.html#fig:smooth-plot-abline2) we have changed the order of the code lines so that the geom lines are behind the points!
**Top Tip: Your intercepts must be values you can see**
Thinking back to why we z\-scored the data for that last figure, we sort of skipped over that, but it did serve a purpose. Here is the original data and the original scatterplot but with the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` added to the code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.10: Line of best fit (blue solid line) for reaction time versus age with missing true diagonal.
The code runs but the diagonal line is nowhere to be seen. The reason is that your figure is zoomed in on the data and the diagonal is "out of shot", if you like. If we were to zoom out on the data we would then see the diagonal line as such:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,60))
```
Figure B.11: Zoomed out to show Line of best fit (blue solid line) for reaction time versus age with true diagonal (black line).
So the key point is that your intercepts have to be set to values you can actually see! If you run your code and the line does not appear, check that the value you have set falls within the range shown on your figure. This applies to `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` and `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`.
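A quick way to avoid this is to check the range of each axis variable before choosing an intercept value; a small sketch, assuming `dat_long` contains the `rt` and `age` columns used above:
```
# check the spread of each axis variable first: an xintercept or
# yintercept outside these ranges will not be visible without zooming out
range(dat_long$rt, na.rm = TRUE)
range(dat_long$age, na.rm = TRUE)
```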
B.2 Zooming in and out
----------------------
As in the example above, it can be very beneficial to be able to zoom in and out of figures, mainly to focus the frame on a given section. One function we can use to do this is `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` from **`ggplot2`**. The main arguments are the limits on the x\-axis (`xlim = c(value, value)`), the limits on the y\-axis (`ylim = c(value, value)`), and whether to add a small expansion to those limits or not (`expand = TRUE/FALSE`). Looking at the scatterplot of age and reaction time again, we could use `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` to zoom fully out:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = FALSE)
```
Figure B.12: Zoomed out on scatterplot with no expansion around set limits
And we can add a small expansion by changing the `expand` argument to `TRUE`:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = TRUE)
```
Figure B.13: Zoomed out on scatterplot with small expansion around set limits
Or we can zoom right in on a specific area of the plot if there was something we wanted to highlight. Here for example we are just showing the reaction times between 500 and 725 msecs, and all ages between 15 and 55:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(500,725), ylim = [c](https://rdrr.io/r/base/c.html)(15,55), expand = TRUE)
```
Figure B.14: Zoomed in on scatterplot with small expansion around set limits
And you can zoom in or out on just the x\-axis or just the y\-axis; it just depends on what you want to show.
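For example, a minimal sketch that zooms only the x\-axis by supplying `xlim` and leaving `ylim` unset, so the y\-axis keeps its full range (the values here are illustrative):
```
# zoom on the x-axis only; the y-axis keeps its full data-driven range
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500, 725))
```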
B.3 Setting the axis values
---------------------------
**Continuous scales**
You may have noticed that depending on the spread of your data, and how much of the figure you see, the values on the axes tend to change. Often we don't want this and want the values to be constant. We have already used functions to control this in the main body of the paper \- the `scale_*` functions. Here we will use `[scale_x_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` and `[scale_y_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` to set the values on the axes to what we want. The main arguments in both functions are the limits (`limits = c(value, value)`) and the breaks (the tick marks essentially, `breaks = value:value`). Note that the limits are just two values (minimum and maximum), whereas the breaks are a series of values (from 0 to 100, for example). If we use the scatterplot of age and reaction time, then our code might look like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = 0:1000) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = 0:100)
```
Figure B.15: Changing the values on the axes
That actually looks rubbish because we simply have too many values on our axes, so we can use the `[seq()](https://rdrr.io/r/base/seq.html)` function, from base R, to get a bit more control. The arguments here are the first value (`from = value`), the last value (`to = value`), and the size of the steps (`by = value`). For example, `seq(0,10,2)` would give all values between 0 and 10 in steps of 2 (i.e. 0, 2, 4, 6, 8 and 10\). So using that idea we can change the y\-axis in steps of 5 (years) and the x\-axis in steps of 50 (msecs) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5))
```
Figure B.16: Changing the values on the axes using the seq() function
Which gives us a much nicer and cleaner set of values on our axes. And if we combine that approach for setting the axes values with our zoom function (`[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`), then we can get something that looks like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5)) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(250,750), ylim = [c](https://rdrr.io/r/base/c.html)(15,55))
```
Figure B.17: Combining scale functions and zoom functions
Which actually looks much like our original scatterplot but with better definition on the axes. So you can see we can actually have a lot of control over the axes and what we see. However, one thing to note is that you should not use the `limits` argument within the `scale_*` functions as a zoom. It won't work like that and will instead just disregard data. Look at this example:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(500,600))
```
```
## Warning: Removed 166 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 166 rows containing missing values (geom_point).
```
Figure B.18: Combining scale functions and zoom functions
It may look like it has zoomed in on the data, but actually it has removed all data outwith the limits. That is what the warnings are telling you: there is now no data above or below the limits, even though we know from the earlier plots that there should be. So `scale_*` functions can change the values on the axes, but `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` is for zooming in and out.
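If the goal really was to focus on the 500\-600 msec window, the equivalent zoom with `coord_cartesian()` keeps every data point (so the line of best fit is still estimated from the full dataset) and produces no warnings. A sketch of that alternative:
```
# zooming rather than dropping data: the smoothed line is still fitted
# to all observations, only the visible window changes
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500, 600))
```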
**Discrete scales**
The same idea of `limits` within a `scale_*` function can also be used to change the order of categories on a discrete scale. For example if we look at our boxplots again in Figure [4\.10](representing-summary-statistics.html#fig:viobox6), we see this figure:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.19: Using transparency on the fill color.
The figures always default to the alphabetical order. Sometimes that is what we want; sometimes that is not what we want. If we wanted to switch the order of **word** and **non\-word** so that the non\-word condition comes first we would use the `[scale_x_discrete()](https://ggplot2.tidyverse.org/reference/scale_discrete.html)` function and set the limits within it (`limits = c("category","category")`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.20: Switching orders of categorical variables
And that works just the same if you have more conditions, which you will see if you compare Figure [B.20](additional-customisation-options.html#fig:viobox6-scale1) to the figure below, where we have flipped the order of non\-word and word from the original default alphabetical order.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.21: Same as earlier figure but with order of conditions on x\-axis altered.
**Changing Order of Factors**
Again, you have a lot of control beyond the default alphabetical order that **`ggplot2`** tends to plot in. One question you might have, though, is why **monolingual** and **bilingual** are not in alphabetical order? If they were, then the **bilingual** condition would be plotted first. The answer is that, thinking back to the start of the paper, we changed our conditions from **1** and **2** to the factor names of **monolingual** and **bilingual**, and **`[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)`** maintains that factor order when plotting. So if we want to plot it in a different fashion we need to do a bit of factor reordering. This can be done much like earlier using the `[factor()](https://rdrr.io/r/base/factor.html)` function and stating the order of levels we want (`levels = c("factor","factor")`). But be careful with spelling, as it must match up to the names of the factor levels that already exist.
In this example, we will reorder the factors so that **bilingual** is presented first but leave the order of **word** and **non\-word** as the alphabetical default. Note in the code though that we are not permanently storing the factor change as we don't want to keep this new order. We are just changing the order "on the fly" for this one example before putting it into the plot.
```
dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(language = [factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("bilingual","monolingual"))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.22: Same as the earlier figure but with the order of the language factor levels altered so that bilingual is plotted first.
And if we compare this new figure to the original, side\-by\-side, we see the difference:
Figure B.23: Switching factor orders
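The code for that side\-by\-side comparison is not shown above. A minimal sketch of one way to do it, assuming the **`patchwork`** package is available (it is not used in the code above, and the `stat_summary()` layers are dropped here to keep the sketch short), is to store the two versions as objects and add them together:

```
library(ggplot2)
library(dplyr)
library(patchwork)  # assumed here purely for combining the two plots

# Default factor order (as in the earlier figure)
p_default <- ggplot(dat_long, aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()

# Reordered "on the fly" so that bilingual is plotted first
p_reordered <- dat_long %>%
  mutate(language = factor(language, levels = c("bilingual", "monolingual"))) %>%
  ggplot(aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()

p_default + p_reordered  # patchwork places the two plots side by side
```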
B.4 Controlling the Legend
--------------------------
**Using the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`**
Whilst we are on the subject of changing the order and position of elements of the figure, you might think about changing the position of the figure legend. There are actually a few ways of doing it, but a simple approach is to use the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function and add that to the ggplot chain. For example, if we run the below code and look at the output:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.24: Figure Legend removed using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
We see the same display as Figure [B.19](additional-customisation-options.html#fig:viobox6-add) but with no legend. That is quite useful because the legend just repeats the x\-axis and becomes redundant. The `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function works by setting the legend associated with the `fill` layer (i.e. `fill = condition`) to `"none"`, basically removing it. One thing to note with this approach is that you need to set a guide for every legend, otherwise a legend will appear. What that means is that if you had set both `fill = condition` and `color = condition`, then you would need to set both `fill` and `color` to `"none"` within `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none", color = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.25: Removing more than one legend with `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
Whereas if you hadn't used `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` you would see the following:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.26: Figure with more than one Legend
The key thing to note here is that in the above figure there are actually two legends (one for `fill` and one for `color`), but they are overlaid on top of each other as they are associated with the same variable. You can test this by removing just one of the options from the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function - the other legend will still remain. So you need to turn them both off, or you can use this selectively to keep only the parts of the legend you want visible.
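As a quick sketch of that test (not shown in the original), turning off only the `fill` guide leaves the `color` legend visible:

```
# Sketch: only the fill legend is removed; the colour legend remains.
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  guides(fill = "none") +
  theme_minimal()
```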
**Using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`**
An alternative to the guides function is using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function. The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function can actually be used to control a whole host of options in the plot, which we will come on to, but you can use it as a quick way to turn off the legend as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(legend.position = "none")
```
Figure B.27: Removing the legend with `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`
What you can see is that within the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function we set an argument for `legend.position` and we set that to `"none"` \- again removing the legend entirely. One difference to note here is that it removes all aspects of the legend, whereas, as we said, using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` allows you to control different parts of the legend (leaving the `fill` or the `color` showing, or both). So `legend.position = "none"` is a bit more brute force, and can be handy when you are using various different means of distinguishing between conditions of a variable and don't want to have to remove each aspect individually using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`.
An extension here of course is not just removing the legend, but moving the legend to a different position. This can be done by setting `legend.position = ...` to either `"top"`, `"bottom"`, `"left"` or `"right"` as shown:
Figure B.28: Legend position options using theme()
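The code behind Figure B.28 is not reproduced above; a minimal sketch of the idea, reusing the violin\-boxplot from earlier, is simply to change the `legend.position` value:

```
# Sketch: moving the legend below the plot.
# Swap "bottom" for "top", "left" or "right" as needed.
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = "bottom")
```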
Or even as a coordinate within your figure, expressed as a proportion of your figure \- i.e. c(x \= 0, y \= 0\) would be the bottom left of your figure and c(x \= 1, y \= 1\) would be the top right, as shown here:
Figure B.29: Legend position options using theme()
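Again the code for Figure B.29 is not shown above; a sketch of the coordinate approach might look like the following. (Note that in recent versions of **`ggplot2`** \- 3.5.0 and later \- the preferred spelling is `legend.position = "inside"` together with `legend.position.inside = c(x, y)`.)

```
# Sketch: placing the legend inside the plotting area, near the top right.
# c(0.9, 0.8) is a proportion of the panel: c(0, 0) = bottom left, c(1, 1) = top right.
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = c(0.9, 0.8))
```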
And so with a little trial and error you can position your legend where you want it without crashing into your figure, hopefully!
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/additional-customisation-options.html |
B Additional customisation options
==================================
B.1 Adding lines to plots
-------------------------
**Vertical Lines \- geom\_vline()**
Often it can be useful to put a marker into our plots to highlight a certain criterion value. For example, if you were working with a scale that has a cut\-off, perhaps the Autism Spectrum Quotient 10 ([Allison et al., 2012](references.html#ref-allison2012toward)), then you might want to put a line at a score of 7; the point at which the researchers suggest the participant is referred further. Alternatively, thinking about the Stroop test we have looked at in this paper, perhaps you had a level of accuracy that you wanted to make sure was reached \- let's say 80%. If we refer back to Figure [3\.1](transforming-data.html#fig:histograms), which used the code below:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
and displayed the spread of the accuracy scores as such:
Figure B.1: Histogram of accuracy scores.
if we wanted to add a line at the 80% level then we could use the `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from **`ggplot2`**, with the argument `xintercept = 80`, meaning cut the x\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80)
```
Figure B.2: Histogram of accuracy scores with black solid vertical line indicating 80% accuracy.
Now that looks ok but the line is a bit hard to see so we can change the style (`linetype = value`), color (`color = "color"`) and weight (`size = value`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80, linetype = 2, color = "red", size = 1.5)
```
Figure B.3: Histogram of accuracy scores with red dashed vertical line indicating 80% accuracy.
**Horizontal Lines \- geom\_hline()**
Another situation may be that you want to put a horizontal line on your figure to mark a value of interest on the y\-axis. Again thinking about our Stroop experiment, perhaps we wanted to indicate the 80% accuracy line on our boxplot figures. If we look at Figure [4\.1](representing-summary-statistics.html#fig:boxplot1), which used this code to display the basic boxplot:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure B.4: Basic boxplot.
we could then use the `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, from **`ggplot2`**, this time with the argument `yintercept = 80`, meaning cut the y\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80)
```
Figure B.5: Basic boxplot with black solid horizontal line indicating 80% accuracy.
and again we can embellish the line using the same arguments as above. We will put in some different values here just to show the changes:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80, linetype = 3, color = "blue", size = 2)
```
Figure B.6: Basic boxplot with blue dotted horizontal line indicating 80% accuracy.
**LineTypes**
One thing worth noting is that the `linetype` argument can be specified either as a value or as a word. They match up as follows:
| Value | Word |
| --- | --- |
| linetype \= 0 | linetype \= "blank" |
| linetype \= 1 | linetype \= "solid" |
| linetype \= 2 | linetype \= "dashed" |
| linetype \= 3 | linetype \= "dotted" |
| linetype \= 4 | linetype \= "dotdash" |
| linetype \= 5 | linetype \= "longdash" |
| linetype \= 6 | linetype \= "twodash" |
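As a quick illustration of that equivalence (a sketch, not from the original), these two calls should draw the same dashed line on the boxplot from above:

```
# linetype as a value...
ggplot(dat_long, aes(x = condition, y = acc)) +
  geom_boxplot() +
  geom_hline(yintercept = 80, linetype = 2)

# ...and as the equivalent word
ggplot(dat_long, aes(x = condition, y = acc)) +
  geom_boxplot() +
  geom_hline(yintercept = 80, linetype = "dashed")
```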
**Diagonal Lines \- geom\_abline()**
The last type of line you might want to overlay on a figure is a diagonal line. For example, perhaps you have created a scatterplot and you want to show the true diagonal as a reference for the line of best fit. To show this, we will refer back to Figure [3\.5](transforming-data.html#fig:smooth-plot), which displayed the line of best fit for reaction time versus age and used the following code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.7: Line of best fit for reaction time versus age.
By eye that would appear to be a fairly flat relationship, but we will add the true diagonal to help clarify. To do this we use `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, again from **`ggplot2`**, and we give it the arguments of the slope (`slope = value`) and the intercept (`intercept = value`). We are also going to scale the data to turn it into z\-scores to help us visualise the relationship better, as follows:
```
dat_long_scale <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(rt_zscore = (rt - [mean](https://rdrr.io/r/base/mean.html)(rt))/[sd](https://rdrr.io/r/stats/sd.html)(rt),
age_zscore = (age - [mean](https://rdrr.io/r/base/mean.html)(age))/[sd](https://rdrr.io/r/stats/sd.html)(age))
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.8: Line of best fit (blue line) for reaction time versus age with true diagonal shown (black line).
So now we can see the line of best fit (blue line) in relation to the true diagonal (black line). We will come back to why we z\-scored the data in a minute, but first let's finish tidying up this figure, using some of the customisation we have seen as it is a bit messy. Something like this might look cleaner:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0, linetype = "dashed", color = "black", size = .5) +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.9: Line of best fit (blue solid line) for reaction time versus age with true diagonal shown (black line dashed).
That maybe looks a bit cluttered but it gives a nice example of how you can use the different geoms for adding lines to add information to your figure, clearly visualising the weak relationship between reaction time and age. **Note:** Do remember about the layering system however; you will notice that in the code for Figure [B.9](additional-customisation-options.html#fig:smooth-plot-abline2) we have changed the order of the code lines so that the geom lines are behind the points!
**Top Tip: Your intercepts must be values you can see**
Thinking back to why we z\-scored the data for that last figure, we sort of skipped over that, but it did serve a purpose. Here is the original data and the original scatterplot but with the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` added to the code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.10: Line of best fit (blue solid line) for reaction time versus age with missing true diagonal.
The code runs but the diagonal line is nowhere to be seen. The reason is that your figure is zoomed in on the data and the diagonal is "out of shot", if you like. If we were to zoom out on the data we would then see the diagonal line, as such:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,60))
```
Figure B.11: Zoomed out to show Line of best fit (blue solid line) for reaction time versus age with true diagonal (black line).
So the key point is that your intercepts have to be set to values that are actually visible on your figure for you to see them! If you run your code and the line does not appear, check that the value you have set can actually be seen on your figure. This applies to `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` and `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`.
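One quick way to check is to look at the range of the variables you are plotting before choosing an intercept value. A minimal sketch using base R's `range()` (assuming the `dat_long` data used throughout):

```
# check the span of each axis variable before choosing an intercept
range(dat_long$rt)    # min and max reaction time (x-axis)
range(dat_long$age)   # min and max age (y-axis)

# an intercept (or slope/intercept combination) that falls outside these
# ranges will be drawn off-screen and so will not appear on the figure
```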
B.2 Zooming in and out
----------------------
Like in the example above, it can be very beneficial to be able to zoom in and out of figures, mainly to focus the frame on a given section. One function we can use to do this is `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` from **`ggplot2`**. The main arguments are the limits on the x\-axis (`xlim = c(value, value)`), the limits on the y\-axis (`ylim = c(value, value)`), and whether to add a small expansion to those limits or not (`expand = TRUE/FALSE`). Looking at the scatterplot of age and reaction time again, we could use `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` to zoom fully out:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = FALSE)
```
Figure B.12: Zoomed out on scatterplot with no expansion around set limits
And we can add a small expansion by changing the `expand` argument to `TRUE`:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = TRUE)
```
Figure B.13: Zoomed out on scatterplot with small expansion around set limits
Or we can zoom right in on a specific area of the plot if there was something we wanted to highlight. Here for example we are just showing the reaction times between 500 and 725 msecs, and all ages between 15 and 55:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(500,725), ylim = [c](https://rdrr.io/r/base/c.html)(15,55), expand = TRUE)
```
Figure B.14: Zoomed in on scatterplot with small expansion around set limits
And you can zoom in and out on just the x\-axis or just the y\-axis; it just depends on what you want to show.
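For example, to zoom in on the x\-axis only and leave the y\-axis at its default range, you could pass just `xlim`. A minimal sketch based on the scatterplot above:

```
# zoom the x-axis to 500-725 msecs only; the y-axis keeps its default range
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500, 725))
```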
B.3 Setting the axis values
---------------------------
**Continuous scales**
You may have noticed that depending on the spread of your data, and how much of the figure you see, the values on the axes tend to change. Often we don't want this and want the values to be constant. We have already used functions to control this in the main body of the paper \- the `scale_*` functions. Here we will use `[scale_x_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` and `[scale_y_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` to set the values on the axes to what we want. The main arguments in both functions are the limits (`limits = c(value, value)`) and the breaks (essentially the tick marks, `breaks = value:value`). Note that the limits are just two values (minimum and maximum), whereas the breaks are a series of values (from 0 to 100, for example). If we use the scatterplot of age and reaction time, then our code might look like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = 0:1000) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = 0:100)
```
Figure B.15: Changing the values on the axes
That actually looks rubbish because we simply have too many values on our axes, so we can use the `[seq()](https://rdrr.io/r/base/seq.html)` function, from **base R**, to get a bit more control. The arguments here are the first value (`from = value`), the last value (`to = value`), and the size of the steps (`by = value`). For example, `seq(0,10,2)` would give all values between 0 and 10 in steps of 2 (i.e. 0, 2, 4, 6, 8 and 10\). So using that idea we can change the y\-axis in steps of 5 (years) and the x\-axis in steps of 50 (msecs) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5))
```
Figure B.16: Changing the values on the axes using the seq() function
Which gives us a much nicer and cleaner set of values on our axes. And if we combine that approach for setting the axes values with our zoom function (`[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`), then we can get something that looks like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5)) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(250,750), ylim = [c](https://rdrr.io/r/base/c.html)(15,55))
```
Figure B.17: Combining scale functions and zoom functions
Which actually looks much like our original scatterplot but with better definition on the axes. So you can see we can actually have a lot of control over the axes and what we see. However, one thing to note is that you should not use the `limits` argument within the `scale_*` functions as a zoom. It won't work like that; instead it will simply discard data. Look at this example:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(500,600))
```
```
## Warning: Removed 166 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 166 rows containing missing values (geom_point).
```
Figure B.18: Using the limits argument within a scale function removes data rather than zooming
It may look like it has zoomed in on the data, but it has actually removed all the data outwith the limits. That is what the warnings are telling you: there is now no data above or below the limits, even though we know from the earlier plots that there should be. So `scale_*` functions can change the values shown on the axes, but `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` is the tool for zooming in and out.
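So if the aim really was to focus on reaction times between 500 and 600 msecs, the zoom version of that last example keeps all the data and just changes the view. A minimal sketch:

```
# same region of interest, but zooming rather than dropping data;
# the smooth is still fitted to the full data set
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500, 600))
```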
**Discrete scales**
The same idea of `limits` within a `scale_*` function can also be used to change the order of categories on a discrete scale. For example if we look at our boxplots again in Figure [4\.10](representing-summary-statistics.html#fig:viobox6), we see this figure:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.19: Using transparency on the fill color.
The figures always default to the alphabetical order. Sometimes that is what we want; sometimes that is not what we want. If we wanted to switch the order of **word** and **non\-word** so that the non\-word condition comes first we would use the `[scale_x_discrete()](https://ggplot2.tidyverse.org/reference/scale_discrete.html)` function and set the limits within it (`limits = c("category","category")`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.20: Switching orders of categorical variables
And that works just the same if you have more conditions, which you will see if you compare Figure [B.20](additional-customisation-options.html#fig:viobox6-scale1) to the figure below, where we have again flipped the order of non\-word and word from the original default alphabetical order.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.21: Same as earlier figure but with order of conditions on x\-axis altered.
**Changing Order of Factors**
Again, you have a lot of control beyond the default alphabetical order that **`ggplot2`** tends to plot in. One question you might have though is why **monolingual** and **bilingual** are not in alphabetical order? If they were, then the **bilingual** condition would be plotted first. The answer is, thinking back to the start of the paper, we changed our conditions from **1** and **2** to the factor names of **monolingual** and **bilingual**, and **`[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)`** maintains that factor order when plotting. So if we want to plot it in a different fashion we need to do a bit of factor reordering. This can be done much like earlier using the `[factor()](https://rdrr.io/r/base/factor.html)` function and stating the order of levels we want (`levels = c("factor","factor")`). But be careful with spelling as it must match up to the names of the factor levels that already exist.
In this example, we will reorder the factors so that **bilingual** is presented first but leave the order of **word** and **non\-word** as the alphabetical default. Note in the code though that we are not permanently storing the factor change as we don't want to keep this new order. We are just changing the order "on the fly" for this one example before putting it into the plot.
```
dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(language = [factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("bilingual","monolingual"))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.22: Same as earlier figure but with the order of the language factor levels altered.
And if we compare this new figure to the original, side\-by\-side, we see the difference:
Figure B.23: Switching factor orders
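If you did want to keep the new order for all later plots and analyses, you would store the result of the `mutate()` back into the data rather than piping it straight into `ggplot()`. A minimal sketch (note this overwrites `dat_long`, so only do it if you really want the new order to stick):

```
# permanently reorder the language factor so bilingual is the first level
dat_long <- dat_long %>%
  mutate(language = factor(language,
                           levels = c("bilingual", "monolingual")))
```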
B.4 Controlling the Legend
--------------------------
**Using the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`**
Whilst we are on the subject of changing the order and position of elements of the figure, you might think about changing the position of a figure legend. There are actually a few ways of doing it, but a simple approach is to use the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function and add that to the ggplot chain. For example, if we run the below code and look at the output:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.24: Figure Legend removed using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
We see the same display as Figure [B.19](additional-customisation-options.html#fig:viobox6-add) but with no legend. That is quite useful because the legend just repeats the x\-axis and becomes redundant. The `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function works by setting the legend associated with the `fill` aesthetic (i.e. `fill = condition`) to `"none"`, basically removing it. One thing to note with this approach is that you need to set a guide for every legend, otherwise a legend will appear. What that means is that if you had set both `fill = condition` and `color = condition`, then you would need to set both `fill` and `color` to `"none"` within `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none", color = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.25: Removing more than one legend with `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
Whereas if you hadn't used `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` you would see the following:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.26: Figure with more than one Legend
The key thing to note here is that in the above figure there are actually two legends (one for `fill` and one for `color`), but they are overlaid on top of each other as they are associated with the same variable. You can test this by removing either one of the options from the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function; one of the legends will still remain. So you need to turn them both off, or you can use this selectively to keep just the part of the legend you want.
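For example, to keep the `fill` legend but hide the `color` one, you would set only that aesthetic to `"none"`. A minimal sketch:

```
# only the color legend is removed; the fill legend remains visible
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  guides(color = "none") +
  theme_minimal()
```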
**Using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`**
An alternative to the guides function is using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function. The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function can actually be used to control a whole host of options in the plot, which we will come on to, but you can use it as a quick way to turn off the legend as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(legend.position = "none")
```
Figure B.27: Removing the legend with `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`
What you can see is that within the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function we set the `legend.position` argument to `"none"` \- again removing the legend entirely. One difference to note here is that it removes all aspects of the legend, whereas, as we said, using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` allows you to control different parts of the legend (either leaving the `fill` or `color` showing, or both). So `legend.position = "none"` is a bit more brute force and can be handy when you are using various different means of distinguishing between conditions of a variable and don't want to have to remove each aspect using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`.
An extension here of course is not just removing the legend, but moving the legend to a different position. This can be done by setting `legend.position = ...` to either `"top"`, `"bottom"`, `"left"` or `"right"` as shown:
Figure B.28: Legend position options using theme()
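The code for the figure above is not shown, but moving the legend is just a change to the `legend.position` argument. A minimal sketch putting the legend at the bottom of one of the earlier violin\-boxplots (swap in `"top"`, `"left"` or `"right"` as needed):

```
# same violin-boxplot as before, with the legend moved below the plot
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = "bottom")
```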
Or even as a coordinate within your figure, expressed as a proportion of your figure \- i.e. `c(x = 0, y = 0)` would be the bottom left of your figure and `c(x = 1, y = 1)` would be the top right, as shown here:
Figure B.29: Legend position options using theme()
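Again the exact code for that figure is not shown here, but the coordinate version is the same idea with a length\-two vector instead of a keyword. A minimal sketch placing the legend near the top right of the plotting area:

```
# legend placed inside the plot area, near the top-right corner
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = c(0.9, 0.9))
```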
And so with a little trial and error you can position your legend where you want it without crashing into your figure, hopefully!
B Additional customisation options
==================================
B.1 Adding lines to plots
-------------------------
**Vertical Lines \- geom\_vline()**
Often it can be useful to put a marker into our plots to highlight a certain criterion value. For example, if you were working with a scale that has a cut\-off, perhaps the Autism Spectrum Quotient 10 ([Allison et al., 2012](references.html#ref-allison2012toward)), then you might want to put a line at a score of 7; the point at which the researchers suggest the participant is referred further. Alternatively, thinking about the Stroop test we have looked at in this paper, perhaps you had a level of accuracy that you wanted to make sure was reached \- let's say 80%. If we refer back to Figure [3\.1](transforming-data.html#fig:histograms), which used the code below:
```
ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  scale_x_continuous(name = "Accuracy (0-100)")
```
and displayed the spread of the accuracy scores as such:
Figure B.1: Histogram of accuracy scores.
if we wanted to add a line at the 80% level then we could use the `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from the **`ggplot2`** package, with the argument `xintercept = 80`, meaning the line crosses the x\-axis at 80, as follows:
```
ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  scale_x_continuous(name = "Accuracy (0-100)") +
  geom_vline(xintercept = 80)
```
Figure B.2: Histogram of accuracy scores with black solid vertical line indicating 80% accuracy.
Now that looks OK, but the line is a bit hard to see, so we can change the style (`linetype = value`), color (`color = "color"`) and weight (`size = value`) as follows:
```
ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  scale_x_continuous(name = "Accuracy (0-100)") +
  geom_vline(xintercept = 80, linetype = 2, color = "red", size = 1.5)
```
Figure B.3: Histogram of accuracy scores with red dashed vertical line indicating 80% accuracy.
**Horizontal Lines \- geom\_hline()**
Another situation may be that you want to put a horizontal line on your figure to mark a value of interest on the y\-axis. Again thinking about our Stroop experiment, perhaps we wanted to indicate the 80% accuracy line on our boxplot figures. If we look at Figure [4\.1](representing-summary-statistics.html#fig:boxplot1), which used this code to display the basic boxplot:
```
ggplot(dat_long, aes(x = condition, y = acc)) +
  geom_boxplot()
```
Figure B.4: Basic boxplot.
we could then use the `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from the **`ggplot2`** package, this time with the argument `yintercept = 80`, meaning the line crosses the y\-axis at 80, as follows:
```
ggplot(dat_long, aes(x = condition, y = acc)) +
  geom_boxplot() +
  geom_hline(yintercept = 80)
```
Figure B.5: Basic boxplot with black solid horizontal line indicating 80% accuracy.
and again we can embellish the line using the same arguments as above. We will put in some different values here just to show the changes:
```
ggplot(dat_long, aes(x = condition, y = acc)) +
  geom_boxplot() +
  geom_hline(yintercept = 80, linetype = 3, color = "blue", size = 2)
```
Figure B.6: Basic boxplot with blue dotted horizontal line indicating 80% accuracy.
**LineTypes**
One thing worth noting is that the `linetype` argument can be specified as either a value or a word, and a short example follows the table. They match up as follows:
| Value | Word |
| --- | --- |
| linetype \= 0 | linetype \= "blank" |
| linetype \= 1 | linetype \= "solid" |
| linetype \= 2 | linetype \= "dashed" |
| linetype \= 3 | linetype \= "dotted" |
| linetype \= 4 | linetype \= "dotdash" |
| linetype \= 5 | linetype \= "longdash" |
| linetype \= 6 | linetype \= "twodash" |
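So, for example, the red dashed line from Figure B.3 could equally be written with the word form. This minimal sketch re\-uses the same histogram code and just swaps `linetype = 2` for `linetype = "dashed"`, which should produce an identical plot:
```
# linetype = 2 and linetype = "dashed" draw the same dashed line
ggplot(dat_long, aes(x = acc)) +
  geom_histogram(binwidth = 1, fill = "white", color = "black") +
  scale_x_continuous(name = "Accuracy (0-100)") +
  geom_vline(xintercept = 80, linetype = "dashed", color = "red", size = 1.5)
```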
**Diagonal Lines \- geom\_abline()**
The last type of line you might want to overlay on a figure is perhaps a diagonal line. For example, perhaps you have created a scatterplot and you want to have the true diagonal line for reference to the line of best fit. To show this, we will refer back to Figure [3\.5](transforming-data.html#fig:smooth-plot) which displayed the line of best fit for the reaction time versus age, and used the following code:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm")
```
Figure B.7: Line of best fit for reaction time versus age.
By eye that would appear to be a fairly flat relationship, but we will add the true diagonal to help clarify. To do this we use the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, again from **`ggplot2`**, and we give it the arguments of the slope (`slope = value`) and the intercept (`intercept = value`). We are also going to scale the data to turn it into z\-scores to help us visualise the relationship better, as follows:
```
dat_long_scale <- dat_long %>%
  mutate(rt_zscore = (rt - mean(rt))/sd(rt),
         age_zscore = (age - mean(age))/sd(age))
ggplot(dat_long_scale, aes(x = rt_zscore, y = age_zscore)) +
  geom_point() +
  geom_smooth(method = "lm") +
  geom_abline(slope = 1, intercept = 0)
```
Figure B.8: Line of best fit (blue line) for reaction time versus age with true diagonal shown (black line).
So now we can see the line of best fit (blue line) in relation to the true diagonal (black line). We will come back to why we z\-scored the data in a minute, but first let's finish tidying up this figure, using some of the customisation we have seen as it is a bit messy. Something like this might look cleaner:
```
ggplot(dat_long_scale, aes(x = rt_zscore, y = age_zscore)) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed", color = "black", size = .5) +
  geom_hline(yintercept = 0, linetype = "solid", color = "black", size = .5) +
  geom_vline(xintercept = 0, linetype = "solid", color = "black", size = .5) +
  geom_point() +
  geom_smooth(method = "lm")
```
Figure B.9: Line of best fit (blue solid line) for reaction time versus age with true diagonal shown (black line dashed).
That maybe looks a bit cluttered but it gives a nice example of how you can use the different geoms for adding lines to add information to your figure, clearly visualising the weak relationship between reaction time and age. **Note:** Do remember about the layering system however; you will notice that in the code for Figure [B.9](additional-customisation-options.html#fig:smooth-plot-abline2) we have changed the order of the code lines so that the geom lines are behind the points!
**Top Tip: Your intercepts must be values you can see**
Thinking back to why we z\-scored the data for that last figure, we sort of skipped over that, but it did serve a purpose. Here is the original data and the original scatterplot but with the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` added to the code:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  geom_abline(slope = 1, intercept = 0)
```
Figure B.10: Line of best fit (blue solid line) for reaction time versus age with missing true diagonal.
The code runs but the diagonal line is nowhere to be seen. The reason is that your figure is zoomed in on the data and the diagonal is "out of shot", if you like. If we were to zoom out on the data we would then see the diagonal line as such:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  geom_abline(slope = 1, intercept = 0) +
  coord_cartesian(xlim = c(0,1000), ylim = c(0,60))
```
Figure B.11: Zoomed out to show Line of best fit (blue solid line) for reaction time versus age with true diagonal (black line).
So the key point is that your intercepts have to be set to values that are visible in your figure for you to see them! If you run your code and the line does not appear, check that the value you have set can actually be seen on your figure. This applies to `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` and `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`.
B.2 Zooming in and out
----------------------
Like in the example above, it can be very beneficial to be able to zoom in and out of figures, mainly to focus the frame on a given section. One function we can use to do this is the `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`, in **`ggplot2`**. The main arguments are the limits on the x\-axis (`xlim = c(value, value)`), the limits on the y\-axis (`ylim = c(value, value)`), and whether to add a small expansion to those limits or not (`expand = TRUE/FALSE`). Looking at the scatterplot of age and reaction time again, we could use `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` to zoom fully out:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(0,1000), ylim = c(0,100), expand = FALSE)
```
Figure B.12: Zoomed out on scatterplot with no expansion around set limits
And we can add a small expansion by changing the `expand` argument to `TRUE`:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(0,1000), ylim = c(0,100), expand = TRUE)
```
Figure B.13: Zoomed out on scatterplot with small expansion around set limits
Or we can zoom right in on a specific area of the plot if there was something we wanted to highlight. Here for example we are just showing the reaction times between 500 and 725 msecs, and all ages between 15 and 55:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500,725), ylim = c(15,55), expand = TRUE)
```
Figure B.14: Zoomed in on scatterplot with small expansion around set limits
And you can zoom in and out on just the x\-axis or just the y\-axis; it just depends on what you want to show, as in the sketch below.
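For instance, a minimal sketch that zooms only the x\-axis and leaves the y\-axis at its default range, using the same `dat_long` data as above:
```
# Supplying only xlim zooms the x-axis; the y-axis keeps its default limits
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  coord_cartesian(xlim = c(500, 725))
```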
B.3 Setting the axis values
---------------------------
**Continuous scales**
You may have noticed that depending on the spread of your data, and how much of the figure you see, the values on the axes tend to change. Often we don't want this and want the values to be constant. We have already used functions to control this in the main body of the paper \- the `scale_*` functions. Here we will use `[scale_x_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` and `[scale_y_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` to set the values on the axes to what we want. The main arguments in both functions are the limits (`limits = c(value, value)`) and the breaks (the tick marks essentially, `breaks = value:value`). Note that the limits are just two values (minimum and maximum), whereas the breaks are a series of values (from 0 to 100, for example). If we use the scatterplot of age and reaction time, then our code might look like this:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_continuous(limits = c(0,1000), breaks = 0:1000) +
  scale_y_continuous(limits = c(0,100), breaks = 0:100)
```
Figure B.15: Changing the values on the axes
That actually looks rubbish because we simply have too many values on our axes, so we can use the `[seq()](https://rdrr.io/r/base/seq.html)` function, from **base R**, to get a bit more control. The arguments here are the first value (`from = value`), the last value (`to = value`), and the size of the steps (`by = value`). For example, `seq(0,10,2)` would give all values between 0 and 10 in steps of 2 (i.e. 0, 2, 4, 6, 8 and 10\). So using that idea we can change the y\-axis in steps of 5 (years) and the x\-axis in steps of 50 (msecs) as follows:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_continuous(limits = c(0,1000), breaks = seq(0,1000,50)) +
  scale_y_continuous(limits = c(0,100), breaks = seq(0,100,5))
```
Figure B.16: Changing the values on the axes using the seq() function
Which gives us a much nicer and cleaner set of values on our axes. And if we combine that approach for setting the axes values with our zoom function (`[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`), then we can get something that looks like this:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_continuous(limits = c(0,1000), breaks = seq(0,1000,50)) +
  scale_y_continuous(limits = c(0,100), breaks = seq(0,100,5)) +
  coord_cartesian(xlim = c(250,750), ylim = c(15,55))
```
Figure B.17: Combining scale functions and zoom functions
Which actually looks much like our original scatterplot but with better definition on the axes. So you can see we can actually have a lot of control over the axes and what we see. However, one thing to note is that you should not use the `limits` argument within the `scale_*` functions as a zoom. It won't work like that; instead, it will just discard data. Look at this example:
```
ggplot(dat_long, aes(x = rt, y = age)) +
  geom_point() +
  geom_smooth(method = "lm") +
  scale_x_continuous(limits = c(500,600))
```
```
## Warning: Removed 166 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 166 rows containing missing values (geom_point).
```
Figure B.18: Combining scale functions and zoom functions
It may look like it has zoomed in on the data, but actually it has removed all data outwith the limits. That is what the warnings are telling you: there is now no data above or below the limits, even though we know there should be based on the earlier plots. So `scale_*` functions can change the values on the axes, but `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` is for zooming in and out.
**Discrete scales**
The same idea of `limits` within a `scale_*` function can also be used to change the order of categories on a discrete scale. For example if we look at our boxplots again in Figure [4\.10](representing-summary-statistics.html#fig:viobox6), we see this figure:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()
```
Figure B.19: Using transparency on the fill color.
The figures always default to the alphabetical order. Sometimes that is what we want; sometimes that is not what we want. If we wanted to switch the order of **word** and **non\-word** so that the non\-word condition comes first we would use the `[scale_x_discrete()](https://ggplot2.tidyverse.org/reference/scale_discrete.html)` function and set the limits within it (`limits = c("category","category")`) as follows:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  scale_x_discrete(limits = c("nonword","word")) +
  theme_minimal()
```
Figure B.20: Switching orders of categorical variables
And that works just the same if you have more conditions, which you will see if you compare Figure [B.20](additional-customisation-options.html#fig:viobox6-scale1) to the figure below, where we have flipped the order of non\-word and word from the original default alphabetical order.
```
ggplot(dat_long, aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  stat_summary(fun = "mean", geom = "point",
               position = position_dodge(.9)) +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1,
               position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  scale_x_discrete(limits = c("nonword","word")) +
  theme_minimal()
```
Figure B.21: Same as earlier figure but with order of conditions on x\-axis altered.
**Changing Order of Factors**
Again, you have a lot of control beyond the default alphabetical order that **`ggplot2`** tends to plot in. One question you might have, though, is why **monolingual** and **bilingual** are not in alphabetical order. If they were, then the **bilingual** condition would be plotted first. The answer is, thinking back to the start of the paper, we changed our conditions from **1** and **2** to the factor names of **monolingual** and **bilingual**, and **`[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)`** maintains that factor order when plotting. So if we want to plot it in a different fashion we need to do a bit of factor reordering. This can be done much like earlier using the `[factor()](https://rdrr.io/r/base/factor.html)` function and stating the order of conditions we want (`levels = c("factor","factor")`). But be careful with spelling as it must match up to the names of the factors that already exist.
In this example, we will reorder the factors so that **bilingual** is presented first but leave the order of **word** and **non\-word** as the alphabetical default. Note in the code though that we are not permanently storing the factor change as we don't want to keep this new order. We are just changing the order "on the fly" for this one example before putting it into the plot.
```
dat_long %>%
  mutate(language = factor(language,
                           levels = c("bilingual","monolingual"))) %>%
  ggplot(aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  stat_summary(fun = "mean", geom = "point",
               position = position_dodge(.9)) +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1,
               position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()
```
Figure B.22: Same as earlier figure but with order of conditions on x\-axis altered.
And if we compare this new figure to the original, side\-by\-side, we see the difference:
Figure B.23: Switching factor orders
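If you wanted to produce that side\-by\-side comparison yourself, one option is a plot\-combining package. Here is a minimal sketch assuming the **`patchwork`** package is installed; the final `p_original + p_reordered` line is patchwork's way of placing two ggplots next to each other, and the object names are just illustrative:
```
# A sketch of one way to compare the two orderings side-by-side,
# assuming the patchwork package is available
library(patchwork)

p_original <- ggplot(dat_long, aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()

p_reordered <- dat_long %>%
  mutate(language = factor(language, levels = c("bilingual", "monolingual"))) %>%
  ggplot(aes(x = condition, y = rt, fill = language)) +
  geom_violin() +
  geom_boxplot(width = .2, fatten = NULL, position = position_dodge(.9)) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()

p_original + p_reordered
```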
B.4 Controlling the Legend
--------------------------
**Using the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`**
Whilst we are on the subject of changing order and position of elements of the figure, you might think about changing the position of a figure legend. There are actually a few ways of doing it, but a simple approach is to use the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function and add that to the ggplot chain. For example, if we run the below code and look at the output:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  guides(fill = "none") +
  theme_minimal()
```
Figure B.24: Figure Legend removed using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
We see the same display as Figure [B.19](additional-customisation-options.html#fig:viobox6-add) but with no legend. That is quite useful because the legend just repeats the x\-axis and becomes redundant. The `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function works by setting the legend associated with the `fill` layer (i.e. `fill = condition`) to `"none"`, basically removing it. One thing to note with this approach is that you need to set a guide for every legend, otherwise a legend will appear. What that means is that if you had set both `fill = condition` and `color = condition`, then you would need to set both `fill` and `color` to `"none"` within `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` as follows:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  guides(fill = "none", color = "none") +
  theme_minimal()
```
Figure B.25: Removing more than one legend with `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
Whereas if you hadn't used `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` you would see the following:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal()
```
Figure B.26: Figure with more than one Legend
The key thing to note here is that in the above figure there are actually two legends (one for `fill` and one for `color`), but they are overlaid on top of each other as they are associated with the same variable. You can test this by removing either one of the options from the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function; one of the legends will still remain, as in the sketch below. So you need to turn them both off, or you can use `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` selectively to leave certain parts of the legend showing.
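For example, this minimal sketch suppresses only the `fill` legend; because `color = condition` has not been turned off, a colour legend still appears:
```
# Only the fill legend is removed; the color legend remains visible
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  guides(fill = "none") +
  theme_minimal()
```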
**Using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`**
An alternative to the guides function is using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function. The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function can actually be used to control a whole host of options in the plot, which we will come on to, but you can use it as a quick way to turn off the legend as follows:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = "none")
```
Figure B.27: Removing the legend with `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`
What you can see is that within the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function we set the `legend.position` argument to `"none"` \- again removing the legend entirely. One difference to note here is that it removes all aspects of the legend, whereas, as we said, using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` allows you to control different parts of the legend (leaving either the `fill` or the `color` showing, or both). So `legend.position = "none"` is a bit more brute force and can be handy when you are using various different means of distinguishing between conditions of a variable and don't want to have to remove each aspect using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`.
An extension here of course is not just removing the legend, but moving the legend to a different position. This can be done by setting `legend.position = ...` to either `"top"`, `"bottom"`, `"left"` or `"right"` as shown:
Figure B.28: Legend position options using theme()
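For instance, a minimal sketch that keeps the legend but moves it underneath the figure:
```
# Move the legend below the plot instead of removing it
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = "bottom")
```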
Or you can even place it at a coordinate within your figure, expressed as a proportion of the plotting area \- i.e. c(x \= 0, y \= 0\) would be the bottom left of your figure and c(x \= 1, y \= 1\) would be the top right, as shown here:
Figure B.29: Legend position options using theme()
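And a minimal sketch that places the legend inside the plotting area, towards the top right; the coordinates 0.9 and 0.8 are just illustrative proportions:
```
# Place the legend inside the panel using proportional coordinates
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, fatten = NULL, alpha = .5) +
  stat_summary(fun = "mean", geom = "point") +
  stat_summary(fun.data = "mean_se", geom = "errorbar", width = .1) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = c(0.9, 0.8))
```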
And so with a little trial and error you can position your legend where you want it without crashing into your figure, hopefully!
B.1 Adding lines to plots
-------------------------
**Vertical Lines \- geom\_vline()**
Often it can be useful to put a marker into our plots to highlight a certain criterion value. For example, if you were working with a scale that has a cut\-off, perhaps the Austim Spectrum Quotient 10 ([Allison et al., 2012](references.html#ref-allison2012toward)), then you might want to put a line at a score of 7; the point at which the researchers suggest the participant is referred further. Alternatively, thinking about the Stroop test we have looked at in this paper, perhaps you had a level of accuracy that you wanted to make sure was reached \- let's say 80%. If we refer back to Figure [3\.1](transforming-data.html#fig:histograms), which used the code below:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)")
```
and displayed the spread of the accuracy scores as such:
Figure B.1: Histogram of accuracy scores.
if we wanted to add a line at the 80% level then we could use the `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, again from the **`ggplot2`**, with the argument of `xintercept = 80`, meaning cut the x\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80)
```
Figure B.2: Histogram of accuracy scores with black solid vertical line indicating 80% accuracy.
Now that looks ok but the line is a bit hard to see so we can change the style (`linetype = value`), color (`color = "color"`) and weight (`size = value`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = acc)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 1, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Accuracy (0-100)") +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 80, linetype = 2, color = "red", size = 1.5)
```
Figure B.3: Histogram of accuracy scores with red dashed vertical line indicating 80% accuracy.
**Horizontal Lines \- geom\_hline()**
Another situation may be that you want to put a horizontal line on your figure to mark a value of interest on the y\-axis. Again thinking about our Stroop experiment, perhaps we wanted to indicate the 80% accuracy line on our boxplot figures. If we look at Figure [4\.1](representing-summary-statistics.html#fig:boxplot1), which used this code to display the basic boxplot:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)()
```
Figure B.4: Basic boxplot.
we could then use the `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` function, from the **`ggplot2`**, with, this time, the argument of `yintercept = 80`, meaning cut the y\-axis at 80, as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80)
```
Figure B.5: Basic boxplot with black solid horizontal line indicating 80% accuracy.
and again we can embellish the line using the same arguments as above. We will put in some different values here just to show the changes:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y = acc)) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)() +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 80, linetype = 3, color = "blue", size = 2)
```
Figure B.6: Basic boxplot with blue dotted horizontal line indicating 80% accuracy.
**LineTypes**
One thing worth noting is that the `linetype` argument can actually be specified as both a value or as a word. They match up as follows:
| Value | Word |
| --- | --- |
| linetype \= 0 | linetype \= "blank" |
| linetype \= 1 | linetype \= "solid" |
| linetype \= 2 | linetype \= "dashed" |
| linetype \= 3 | linetype \= "dotted" |
| linetype \= 4 | linetype \= "dotdash" |
| linetype \= 5 | linetype \= "longdash" |
| linetype \= 6 | linetype \= "twodash" |
**Diagonal Lines \- geom\_abline()**
The last type of line you might want to overlay on a figure is perhaps a diagonal line. For example, perhaps you have created a scatterplot and you want to have the true diagonal line for reference to the line of best fit. To show this, we will refer back to Figure [3\.5](transforming-data.html#fig:smooth-plot) which displayed the line of best fit for the reaction time versus age, and used the following code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.7: Line of best fit for reaction time versus age.
By eye that would appear to be a fairly flat relationship but we will add the true diagonal to help clarify. To do this we use the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, again from **`ggplot2`**, and we give it the arguements of the slope (`slope = value`) and the intercept (`intercept = value`). We are also going to scale the data to turn it into z\-scores to help us visualise the relationship better, as follows:
```
dat_long_scale <- dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(rt_zscore = (rt - [mean](https://rdrr.io/r/base/mean.html)(rt))/[sd](https://rdrr.io/r/stats/sd.html)(rt),
age_zscore = (age - [mean](https://rdrr.io/r/base/mean.html)(age))/[sd](https://rdrr.io/r/stats/sd.html)(age))
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.8: Line of best fit (blue line) for reaction time versus age with true diagonal shown (black line).
So now we can see the line of best fit (blue line) in relation to the true diagonal (black line). We will come back to why we z\-scored the data in a minute, but first let's finish tidying up this figure, using some of the customisation we have seen as it is a bit messy. Something like this might look cleaner:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long_scale, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt_zscore, y = age_zscore)) +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0, linetype = "dashed", color = "black", size = .5) +
[geom_hline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(yintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_vline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(xintercept = 0, linetype = "solid", color = "black", size = .5) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm")
```
Figure B.9: Line of best fit (blue solid line) for reaction time versus age with true diagonal shown (black line dashed).
That maybe looks a bit cluttered but it gives a nice example of how you can use the different geoms for adding lines to add information to your figure, clearly visualising the weak relationship between reaction time and age. **Note:** Do remember about the layering system however; you will notice that in the code for Figure [B.9](additional-customisation-options.html#fig:smooth-plot-abline2) we have changed the order of the code lines so that the geom lines are behind the points!
**Top Tip: Your intercepts must be values you can see**
Thinking back to why we z\-scored the data for that last figure, we sort of skipped over that, but it did serve a purpose. Here is the original data and the original scatterplot but with the `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` added to the code:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0)
```
Figure B.10: Line of best fit (blue solid line) for reaction time versus age with missing true diagonal.
The code runs but the diagonal line is nowhere to be seen. The reason is that you figure is zoomed in on the data and the diagonal is "out of shot" if you like. If we were to zoom out on the data we would then see the diagonal line as such:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[geom_abline](https://ggplot2.tidyverse.org/reference/geom_abline.html)(slope = 1, intercept = 0) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,60))
```
Figure B.11: Zoomed out to show Line of best fit (blue solid line) for reaction time versus age with true diagonal (black line).
So the key point is that your intercepts have to be set to visible for values for you to see them! If you run your code and the line does not appear, check that the value you have set can actually be seen on your figure. This applies to `[geom_abline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`, `[geom_hline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)` and `[geom_vline()](https://ggplot2.tidyverse.org/reference/geom_abline.html)`.
B.2 Zooming in and out
----------------------
Like in the example above, it can be very beneficial to be able to zoom in and out of figures, mainly to focus the frame on a given section. One function we can use to do this is the `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`, in **`ggplot2`**. The main arguments are the limits on the x\-axis (`xlim = c(value, value)`), the limits on the y\-axis (`ylim = c(value, value)`), and whether to add a small expansion to those limits or not (`expand = TRUE/FALSE`). Looking at the scatterplot of age and reaction time again, we could use `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` to zoom fully out:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = FALSE)
```
Figure B.12: Zoomed out on scatterplot with no expansion around set limits
And we can add a small expansion by changing the `expand` argument to `TRUE`:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(0,1000), ylim = [c](https://rdrr.io/r/base/c.html)(0,100), expand = TRUE)
```
Figure B.13: Zoomed out on scatterplot with small expansion around set limits
Or we can zoom right in on a specific area of the plot if there was something we wanted to highlight. Here for example we are just showing the reaction times between 500 and 725 msecs, and all ages between 15 and 55:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(500,725), ylim = [c](https://rdrr.io/r/base/c.html)(15,55), expand = TRUE)
```
Figure B.14: Zoomed in on scatterplot with small expansion around set limits
And you can zoom in and zoom out just the x\-axis or just the y\-axis; just depends on what you want to show.
B.3 Setting the axis values
---------------------------
**Continuous scales**
You may have noticed that depending on the spread of your data, and how much of the figure you see, the values on the axes tend to change. Often we don't want this and want the values to be constant. We have already used functions to control this in the main body of the paper \- the `scale_*` functions. Here we will use `[scale_x_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` and `[scale_y_continuous()](https://ggplot2.tidyverse.org/reference/scale_continuous.html)` to set the values on the axes to what we want. The main arguments in both functions are the limits (`limts = c(value, value)`) and the breaks (the tick marks essentially, `breaks = value:value`). Note that the limits are just two values (minimum and maximum), whereas the breaks are a series of values (from 0 to 100, for example). If we use the scatterplot of age and reaction time, then our code might look like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = 0:1000) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = 0:100)
```
Figure B.15: Changing the values on the axes
That actually looks rubbish because we simply have too many values on our axes, so we can use the `[seq()](https://rdrr.io/r/base/seq.html)` function, from **`baseR`**, to get a bit more control. The arguments here are the first value (`from = value`), the last value (`last = value`), and the size of the steps (`by = value`). For example, `seq(0,10,2)` would give all values between 0 and 10 in steps of 2, (i.e. 0, 2, 4, 6, 8 and 10\). So using that idea we can change the y\-axis in steps of 5 (years) and the x\-axis in steps of 50 (msecs) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5))
```
Figure B.16: Changing the values on the axes using the seq() function
Which gives us a much nicer and cleaner set of values on our axes. And if we combine that approach for setting the axes values with our zoom function (`[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)`), then we can get something that looks like this:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,1000), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,1000,50)) +
[scale_y_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(0,100), breaks = [seq](https://rdrr.io/r/base/seq.html)(0,100,5)) +
[coord_cartesian](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)(xlim = [c](https://rdrr.io/r/base/c.html)(250,750), ylim = [c](https://rdrr.io/r/base/c.html)(15,55))
```
Figure B.17: Combining scale functions and zoom functions
Which actually looks much like our original scatterplot but with better definition on the axes. So you can see we can actually have a lot of control over the axes and what we see. However, one thing to note, is that you should not use the `limits` argument within the `scale_*` functions as a zoom. It won't work like that and instead will just disregard data. Look at this example:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt, y = age)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)() +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = "lm") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(limits = [c](https://rdrr.io/r/base/c.html)(500,600))
```
```
## Warning: Removed 166 rows containing non-finite values (stat_smooth).
```
```
## Warning: Removed 166 rows containing missing values (geom_point).
```
Figure B.18: Combining scale functions and zoom functions
It may look like it has zoomed in on the data but actually it has removed all data outwith the limits. That is what the warnings are telling you, and you can see that as there is no data above and below the limits, but we know there should be based on the earlier plots. So `scale_*` functions can change the values on the axes, but `[coord_cartesian()](https://ggplot2.tidyverse.org/reference/coord_cartesian.html)` is for zooming in and out.
**Discrete scales**
The same idea of `limits` within a `scale_*` function can also be used to change the order of categories on a discrete scale. For example if we look at our boxplots again in Figure [4\.10](representing-summary-statistics.html#fig:viobox6), we see this figure:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.19: Using transparency on the fill color.
The figures always default to the alphabetical order. Sometimes that is what we want; sometimes that is not what we want. If we wanted to switch the order of **word** and **non\-word** so that the non\-word condition comes first we would use the `[scale_x_discrete()](https://ggplot2.tidyverse.org/reference/scale_discrete.html)` function and set the limits within it (`limits = c("category","category")`) as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.20: Switching orders of categorical variables
And that works just the same if you have more conditions, as you will see if you compare Figure [B.20](additional-customisation-options.html#fig:viobox6-scale1) to the figure below, where we have flipped the order of non\-word and word from the original default alphabetical order.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(limits = [c](https://rdrr.io/r/base/c.html)("nonword","word")) +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.21: Same as earlier figure but with order of conditions on x\-axis altered.
**Changing Order of Factors**
Again, you have a lot of control beyond the default alphabetical order that **`ggplot2`** tends to plot in. One question you might have though is why **monolingual** and **bilingual** are not in alphabetical order? If they were, then the **bilingual** condition would be plotted first. The answer is, thinking back to the start of the paper, we changed our conditions from **1** and **2** to the factor names of **monolingual** and **bilingual**, and **`[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)`** maintains that factor order when plotting. So if we want to plot it in a different fashion we need to do a bit of factor reordering. This can be done much like earlier using the `[factor()](https://rdrr.io/r/base/factor.html)` function and stating the order of conditions we want (`levels = c("factor","factor")`). But be careful with spelling, as it must match up to the names of the factors that already exist.
In this example, we will reorder the factors so that **bilingual** is presented first but leave the order of **word** and **non\-word** as the alphabetical default. Note in the code though that we are not permanently storing the factor change as we don't want to keep this new order. We are just changing the order "on the fly" for this one example before putting it into the plot.
```
dat_long [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(language = [factor](https://rdrr.io/r/base/factor.html)(language,
levels = [c](https://rdrr.io/r/base/c.html)("bilingual","monolingual"))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = language)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)() +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point",
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1,
position = [position_dodge](https://ggplot2.tidyverse.org/reference/position_dodge.html)(.9)) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.22: Same as earlier figure but with the order of the language factor altered.
And if we compare this new figure to the original, side\-by\-side, we see the difference:
Figure B.23: Switching factor orders
B.4 Controlling the Legend
--------------------------
**Using the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`**
Whilst we are on the subject of changing the order and position of elements of the figure, you might think about changing the position of a figure legend. There are actually a few ways of doing it, but a simple approach is to use the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function and add that to the ggplot chain. For example, if we run the below code and look at the output:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.24: Figure Legend removed using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
We see the same display as Figure [B.19](additional-customisation-options.html#fig:viobox6-add) but with no legend. That is quite useful because the legend just repeats the x\-axis and becomes redundant. The `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function works by setting the legend associated with the `fill` layer (i.e. `fill = condition`) to `"none"`, basically removing it. One thing to note with this approach is that you need to set a guide for every legend, otherwise a legend will appear. What that means is that if you had set both `fill = condition` and `color = condition`, then you would need to set both `fill` and `color` to `"none"` within `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[guides](https://ggplot2.tidyverse.org/reference/guides.html)(fill = "none", color = "none") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.25: Removing more than one legend with `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`
Whereas if you hadn't used `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` you would see the following:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure B.26: Figure with more than one Legend
The key thing to note here is that in the above figure there are actually two legends (one for `fill` and one for `color`), but they are overlaid on top of each other as they are associated with the same variable. You can test this by removing either one of the options from the `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` function; one of the legends will still remain. So you need to turn them both off, or you can use this to deliberately leave one of them showing, as sketched below.
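For example, here is a minimal sketch (not code from the original, but using the same `dat_long` data) that turns off only the `fill` legend, so the legend produced by `color = condition` is the one left showing:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition, color = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  guides(fill = "none") +   # the color legend is still drawn
  theme_minimal()
```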
**Using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`**
An alternative to the guides function is using the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function. The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function can actually be used to control a whole host of options in the plot, which we will come on to, but you can use it as a quick way to turn off the legend as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = condition, y= rt, fill = condition, color = condition)) +
[geom_violin](https://ggplot2.tidyverse.org/reference/geom_violin.html)(alpha = .4) +
[geom_boxplot](https://ggplot2.tidyverse.org/reference/geom_boxplot.html)(width = .2, fatten = NULL, alpha = .5) +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun = "mean", geom = "point") +
[stat_summary](https://ggplot2.tidyverse.org/reference/stat_summary.html)(fun.data = "mean_se", geom = "errorbar", width = .1) +
[scale_fill_brewer](https://ggplot2.tidyverse.org/reference/scale_brewer.html)(palette = "Dark2") +
[theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)() +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(legend.position = "none")
```
Figure B.27: Removing the legend with `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)`
What you can see is that within the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function we set the `legend.position` argument to `"none"` \- again removing the legend entirely. One difference to note here is that it removes all aspects of the legend, whereas, as we said, using `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)` allows you to control different parts of the legend (leaving either the `fill` or the `color` showing, or both). So `legend.position = "none"` is a bit more brute force, and can be handy when you are using several different means of distinguishing between conditions of a variable and don't want to have to remove each aspect individually with `[guides()](https://ggplot2.tidyverse.org/reference/guides.html)`.
An extension here of course is not just removing the legend, but moving the legend to a different position. This can be done by setting `legend.position = ...` to either `"top"`, `"bottom"`, `"left"` or `"right"` as shown:
Figure B.28: Legend position options using theme()
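As a rough sketch of one of those options (this is not the exact code behind the figure above), moving the legend below the plot is just a matter of changing the value passed to `legend.position`:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = "bottom")   # or "top", "left", "right"
```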
Or even as a coordinate within your figure expressed as a proportion of your figure \- i.e. `c(x = 0, y = 0)` would be the bottom left of your figure and `c(x = 1, y = 1)` would be the top right, as shown here:
Figure B.29: Legend position options using theme()
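Again as a sketch rather than the exact code for the figure above, placing the legend inside the plot area at a proportional coordinate looks something like this:
```
ggplot(dat_long, aes(x = condition, y = rt, fill = condition)) +
  geom_violin(alpha = .4) +
  geom_boxplot(width = .2, alpha = .5) +
  scale_fill_brewer(palette = "Dark2") +
  theme_minimal() +
  theme(legend.position = c(0.9, 0.8))   # 90% along x, 80% up y; newer ggplot2 versions prefer legend.position.inside
```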
And so with a little trial and error you can position your legend where you want it without crashing into your figure, hopefully!
C Styling Plots
===============
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
(The original page shows an interactive chart of colour swatches here; hovering over a swatch reveals its R colour name. The full list of names is what `[colours()](https://rdrr.io/r/grDevices/colors.html)` returns.)
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
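As a quick self-contained sketch (using the built-in `mtcars` data, which is not part of this tutorial's data), `alpha` is set like any other fixed aesthetic inside a geom:
```
library(ggplot2)

ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(size = 4, alpha = 0.3)   # 30% opaque, so overlapping points look darker
```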
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
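For example (a sketch assuming `ggplot2` is loaded, again borrowing `mtcars`), shapes are chosen by number, and shapes 21\-25 additionally take a `fill`:
```
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(shape = 21, size = 4, fill = "orange")   # filled circle with an outline
```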
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
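A small sketch (assuming `ggplot2` is loaded), using one of the named linetypes with the built-in `economics` data:
```
ggplot(economics, aes(x = date, y = unemploy)) +
  geom_line(linetype = "dashed")   # also "dotted", "dotdash", "longdash", "twodash"
```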
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. They work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
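Putting those arguments together, here is a minimal sketch (assuming `ggplot2` is loaded, and using the built-in `diamonds` data rather than this tutorial's data):
```
ggplot(diamonds, aes(x = cut, fill = cut)) +
  geom_bar() +
  scale_fill_viridis_d(option = "plasma", direction = -1,
                       begin = 0.1, end = 0.9)
```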
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
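And a continuous counterpart, again just a sketch, mapping colour to a numeric variable:
```
ggplot(diamonds, aes(x = carat, y = price, color = price)) +
  geom_point(alpha = 0.3) +
  scale_colour_viridis_c(option = "magma")
```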
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
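For instance, a minimal sketch (not from the original) applying one of the qualitative palettes to a fill, again with the built-in `diamonds` data:
```
ggplot(diamonds, aes(x = cut, fill = cut)) +
  geom_bar() +
  scale_fill_brewer(palette = "Set2", direction = -1)
```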
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") data with up to 9 categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
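That setup chunk isn't reproduced on this page, but the mapping is done with `windowsFonts()` from base R's grDevices (Windows only); a sketch of what it might look like, with example font names you would swap for the fonts you actually use:
```
# Windows only: map standard family names to installed Windows fonts
# (the font choices below are placeholders, not the tutorial's own settings)
windowsFonts(
  sans  = windowsFont("Arial"),
  serif = windowsFont("Times New Roman"),
  mono  = windowsFont("Courier New")
)
```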
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does a lot more than just change the position of a legend: it can be used to control a whole variety of elements and, eventually, to create your own "theme" for your figures \- say you want a consistent look to your figures across your publications or your lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` and set the size and font. Make sure to load any custom fonts.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
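For example, a sketch assuming the `ggthemes` package is installed and `g` is an existing ggplot:
```
library(ggthemes)

# Economist-style theme paired with its matching colour scale
g + theme_economist() + scale_colour_economist()

# a minimal Tufte-style alternative
g + theme_tufte()
```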
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
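A minimal sketch using base R's `windowsFonts()`; the exact mapping depends on which fonts you specify ("Comic Sans MS" here matches the earlier font example):
```
# map the named font to the Windows font database (Windows only);
# repeat this for each font family you refer to by name
windowsFonts(`Comic Sans MS` = windowsFont("Comic Sans MS"))
```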
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does much more than change the position of a legend: it lets you control a wide variety of plot elements, and you can use it to build your own theme for your figures, for example to keep a consistent look across your publications or lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` and set the size and font. Make sure to load any custom fonts.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.