https://nacnudus.github.io/spreadsheet-munging-strategies/hierarchies-in-formatting.html
2\.6 Hierarchies in formatting ------------------------------ Different kinds of formatting might also represent different levels of a hierarchy, e.g. | formatting | interpretation | | --- | --- | | none | good | | italic | satisfactory | | bold | poor | | bold \& italic | fail | When each kind of formatting relates to a different level of one hierarchy, import the different kinds of formatting into different columns, and then combine them into a third column, perhaps using `paste()`, or `case_when()`. ``` # Step 1: import the table taking only cell values and ignoring the formatting x <- read_excel(path, sheet = "highlight-hierarchy") x ``` ``` ## # A tibble: 4 x 2 ## Name Score ## <chr> <dbl> ## 1 Matilda 7 ## 2 Nicholas 5 ## 3 Olivia 3 ## 4 Paul 1 ``` ``` # Step 2: import the bold and italic formatting of the first column of the table # `bold` and `italic` are palettes of font properties that can be indexed by the # `local_format_id` of a given cell to get the bold/italic status of that cell bold <- xlsx_formats(path)$local$font$bold italic <- xlsx_formats(path)$local$font$italic # Import all the cells, filter out the header row, filter for the first column, # and create new columns `bold`, `italic` and `grade`, by looking up the # local_format_id of each cell in the `bold` and `italic` palettes. formats <- xlsx_cells(path, sheet = "highlight-hierarchy") %>% dplyr::filter(row >= 2, col == 1) %>% # Omit the header row mutate(bold = bold[local_format_id], italic = italic[local_format_id]) %>% mutate(grade = case_when(bold & italic ~ "fail", bold ~ "poor", italic ~ "satisfactory", TRUE ~ "good")) %>% select(bold, italic, grade) # Step 3: append the formatting columns to the rest of the data bind_cols(x, formats) ``` ``` ## # A tibble: 4 x 5 ## Name Score bold italic grade ## <chr> <dbl> <lgl> <lgl> <chr> ## 1 Matilda 7 FALSE FALSE good ## 2 Nicholas 5 FALSE TRUE satisfactory ## 3 Olivia 3 TRUE FALSE poor ## 4 Paul 1 TRUE TRUE fail ``` Here it is again, using only tidyxl and unpivotr. ``` bold <- xlsx_formats(path)$local$font$bold italic <- xlsx_formats(path)$local$font$italic xlsx_cells(path, sheet = "highlight-hierarchy") %>% mutate(bold = bold[local_format_id], italic = italic[local_format_id]) %>% mutate(grade = case_when(bold & italic ~ "fail", bold ~ "poor", italic ~ "satisfactory", TRUE ~ "good")) %>% select(row, col, data_type, character, numeric, bold, italic, grade) %>% behead("up", header) %>% select(-col) %>% spatter(header) ``` ``` ## # A tibble: 4 x 6 ## row bold italic grade Name Score ## <int> <lgl> <lgl> <chr> <chr> <dbl> ## 1 2 FALSE FALSE good Matilda 7 ## 2 3 FALSE TRUE satisfactory Nicholas 5 ## 3 4 TRUE FALSE poor Olivia 3 ## 4 5 TRUE TRUE fail Paul 1 ```
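The examples above combine the two formatting columns with `case_when()`; the `paste()` route mentioned in the text works too. A minimal sketch, reusing the `formats` data frame from the first example (the label-to-grade recoding table is illustrative):

```r
# Combine the two logical formatting columns into a single label with paste(),
# then recode that label into a grade. Assumes `formats` from the example above.
library(dplyr)

formats %>%
  mutate(
    format_label = paste(if_else(bold, "bold", "plain"),
                         if_else(italic, "italic", "plain"),
                         sep = "-"),
    grade = recode(format_label,
                   "plain-plain"  = "good",
                   "plain-italic" = "satisfactory",
                   "bold-plain"   = "poor",
                   "bold-italic"  = "fail")
  )
```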
https://nacnudus.github.io/spreadsheet-munging-strategies/tidy-sentinel.html
2\.7 Sentinel values in non\-text columns ----------------------------------------- R packages like [readr](http://readr.tidyverse.org/) recognise `NA` as a sentinel value that means “Not Applicable”, or “Not Available”, or anything you want. It doesn’t affect the data type of a column when `NA` is one of the values. Some datasets use other symbols as a sentinel value, e.g. `N/A` or `.`, or a combination, in which case you can instruct `readr` to interpret those values as sentinels, and it will import them all as `NA`. But what if the data uses more than one *kind* of sentinel value? For example, Statistics New Zealand uses `…` to mean “Not applicable”, and `..C` to mean “Confidentialised”. Most tools will either regard both values as `NA`, or coerce the whole column to characters. ``` read_csv("a, b, c 1, 2, 3 4, …, ..C", na = c("…", "..C")) # Regard both values as NA ``` ``` ## # A tibble: 2 x 3 ## a b c ## <dbl> <dbl> <dbl> ## 1 1 2 3 ## 2 4 NA NA ``` ``` read_csv("a, b, c 1, 2, 3 4, …, ..C", na = "") # Coerce the whole column to characters ``` ``` ## # A tibble: 2 x 3 ## a b c ## <dbl> <chr> <chr> ## 1 1 2 3 ## 2 4 … ..C ``` A better procedure is to import the sentinel values into their own column, or even into separate `TRUE`/`FALSE` columns for each kind of sentinel. Note that sentinel values relate to the value in the cell, rather than to the whole row, so the first step is to make the dataset *extra\-tidy* as in the section “Already a tidy table but with meaningful formatting of single cells”. ``` # Tidy path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") x <- read_excel(path, sheet = "sentinels") x ``` ``` ## # A tibble: 4 x 3 ## Name Subject Score ## <chr> <chr> <chr> ## 1 Matilda Music 7 ## 2 Nicholas Classics NA ## 3 Olivia … 3 ## 4 Paul NA ..C ``` ``` # Extra-tidy extra_tidy <- gather(x, variable, value, -Name) %>% arrange(Name, variable) extra_tidy ``` ``` ## # A tibble: 8 x 3 ## Name variable value ## <chr> <chr> <chr> ## 1 Matilda Score 7 ## 2 Matilda Subject Music ## 3 Nicholas Score NA ## 4 Nicholas Subject Classics ## 5 Olivia Score 3 ## 6 Olivia Subject … ## 7 Paul Score ..C ## 8 Paul Subject NA ``` With an extra\-tidy dataset, the sentinels can now be appended to the values of individual variables, rather than to whole observations.
``` # Extra-tidy, with row and column numbers of the original variables, and the # sentinels omitted extra_tidy <- read_excel(path, sheet = "sentinels", na = c("NA", "…", "..C")) %>% mutate(row = row_number() + 1L) %>% gather(variable, value, -row, -Name) %>% group_by(row) %>% mutate(col = row_number() + 1L) %>% ungroup() %>% select(row, col, Name, variable, value) %>% arrange(row, col) extra_tidy ``` ``` ## # A tibble: 8 x 5 ## row col Name variable value ## <int> <int> <chr> <chr> <chr> ## 1 2 2 Matilda Subject Music ## 2 2 3 Matilda Score 7 ## 3 3 2 Nicholas Subject Classics ## 4 3 3 Nicholas Score <NA> ## 5 4 2 Olivia Subject <NA> ## 6 4 3 Olivia Score 3 ## 7 5 2 Paul Subject <NA> ## 8 5 3 Paul Score <NA> ``` ``` # Import all the cells, and filter for sentinel values sentinels <- xlsx_cells(path, sheet = "sentinels") %>% dplyr::filter(character %in% c("NA", "…", "..C")) %>% mutate(sentinel = character) %>% select(row, col, sentinel) sentinels ``` ``` ## # A tibble: 4 x 3 ## row col sentinel ## <int> <int> <chr> ## 1 3 3 NA ## 2 4 2 … ## 3 5 2 NA ## 4 5 3 ..C ``` ``` # Join the `sentinel` column to the rest of the data left_join(extra_tidy, sentinels, by = c("row", "col")) ``` ``` ## # A tibble: 8 x 6 ## row col Name variable value sentinel ## <int> <int> <chr> <chr> <chr> <chr> ## 1 2 2 Matilda Subject Music <NA> ## 2 2 3 Matilda Score 7 <NA> ## 3 3 2 Nicholas Subject Classics <NA> ## 4 3 3 Nicholas Score <NA> NA ## 5 4 2 Olivia Subject <NA> … ## 6 4 3 Olivia Score 3 <NA> ## 7 5 2 Paul Subject <NA> NA ## 8 5 3 Paul Score <NA> ..C ``` Here’s another version using only tidyxl and unpivotr, which provides `isolate_sentinels()` to make this much more straightforward. ``` xlsx_cells(path, sheet = "sentinels") %>% select(row, col, data_type, character, numeric) %>% isolate_sentinels(character, c("NA", "…", "..C")) %>% behead("left", Name) %>% behead("up", variable) %>% select(Name, variable, character, numeric, sentinel) ``` ``` ## # A tibble: 8 x 5 ## Name variable character numeric sentinel ## <chr> <chr> <chr> <dbl> <chr> ## 1 Matilda Subject Music NA <NA> ## 2 Matilda Score <NA> 7 <NA> ## 3 Nicholas Subject Classics NA <NA> ## 4 Nicholas Score <NA> NA NA ## 5 Olivia Subject <NA> NA … ## 6 Olivia Score <NA> 3 <NA> ## 7 Paul Subject <NA> NA NA ## 8 Paul Score <NA> NA ..C ```
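The text above also mentions separate `TRUE`/`FALSE` columns for each kind of sentinel. A minimal sketch building on the tidyxl/unpivotr version (the flag column names are illustrative):

```r
# One logical column per kind of sentinel, derived from the `sentinel` column
# produced by isolate_sentinels().
library(dplyr)
library(tidyxl)
library(unpivotr)

xlsx_cells(path, sheet = "sentinels") %>%
  select(row, col, data_type, character, numeric) %>%
  isolate_sentinels(character, c("NA", "…", "..C")) %>%
  behead("left", Name) %>%
  behead("up", variable) %>%
  mutate(
    not_available    = !is.na(sentinel) & sentinel == "NA",
    not_applicable   = !is.na(sentinel) & sentinel == "…",
    confidentialised = !is.na(sentinel) & sentinel == "..C"
  ) %>%
  select(Name, variable, character, numeric,
         not_available, not_applicable, confidentialised)
```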
https://nacnudus.github.io/spreadsheet-munging-strategies/pivot.html
3 Pivot tables ============== This part introduces pivot tables. [Tidyxl](https://nacnudus.github.io/tidyxl) and [unpivotr](https://nacnudus.github.io/unpivotr) come into their own here, and are (as far as I know) the only packages to acknowledge the intuitive grammar of pivot tables. Pivot tables are ones with more than one row of column headers, or more than one column of row headers, or both (and there can be more complex arrangements). Tables in that form take up less space on a page or a screen than ‘tidy’ tables, and are easier for humans to read. But most software can’t interpret or traverse data in that form; it must first be reshaped into a long, ‘tidy’ form, with a single row of column headers. It takes a lot of code to reshape a pivot table into a ‘tidy’ one, and the code has to be bespoke for each table. There’s no general solution, because it is ambiguous whether a given cell is part of a header or part of the data. There are some ambiguities in ‘tidy’ tables, too, which is why most functions for reading csv files allow you to specify whether the first row of the data is a header, and how many rows to skip before the data begins. Functions often guess, but they can never be certain. Pivot tables, being more complex, are so much more ambiguous that it isn’t reasonable to import them with a single function. A better way is to break the problem down into steps: 1. Identify which cells are headers, and which are data. 2. State how the data cells relate to the header cells. The first step is a matter of traversing the cells, which is *much easier* if you load them with the [tidyxl](https://nacnudus.github.io/tidyxl) package, or pass the table through `as_cells()` in the [unpivotr](https://nacnudus.github.io/unpivotr) package. This gives you a table of cells and their properties; one row of the table describes one cell of the source table or spreadsheet. The first two properties are the row and column position of the cell, which makes it easy to filter for cells in a particular region of the spreadsheet. If the first row of cells is a header row, then you can filter for `row == 1`. Here is an example of a pivot table where the first two rows, and the first two columns, are headers. The other cells contain the data. First, see how the cells are laid out in the source file by importing it with readxl. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") original <- read_excel(path, sheet = "pivot-annotations", col_names = FALSE) ``` ``` ## New names: ## * `` -> ...1 ## * `` -> ...2 ## * `` -> ...3 ## * `` -> ...4 ## * `` -> ...5 ## * ... ``` ``` print(original, n = Inf) ``` ``` ## # A tibble: 6 x 6 ## ...1 ...2 ...3 ...4 ...5 ...6 ## <chr> <chr> <chr> <chr> <chr> <chr> ## 1 <NA> <NA> Female <NA> Male <NA> ## 2 <NA> <NA> Matilda Olivia Nicholas Paul ## 3 Humanities Classics 1 2 3 0 ## 4 <NA> History 3 4 5 1 ## 5 Performance Music 5 6 9 2 ## 6 <NA> Drama 7 8 12 3 ``` Compare that with the long set of cells, one per row, that tidyxl gives. (Only a few properties of each cell are shown, to make it easier to read). 
``` cells <- xlsx_cells(path, sheets = "pivot-annotations") select(cells, row, col, data_type, character, numeric) %>% print(cells, n = 20) ``` ``` ## # A tibble: 32 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 5 blank <NA> NA ## 3 2 6 character Male NA ## 4 2 7 blank <NA> NA ## 5 3 4 character Matilda NA ## 6 3 5 character Olivia NA ## 7 3 6 character Nicholas NA ## 8 3 7 character Paul NA ## 9 4 2 character Humanities NA ## 10 4 3 character Classics NA ## 11 4 4 numeric <NA> 1 ## 12 4 5 numeric <NA> 2 ## 13 4 6 numeric <NA> 3 ## 14 4 7 numeric <NA> 0 ## 15 5 2 blank <NA> NA ## 16 5 3 character History NA ## 17 5 4 numeric <NA> 3 ## 18 5 5 numeric <NA> 4 ## 19 5 6 numeric <NA> 5 ## 20 5 7 numeric <NA> 1 ## # … with 12 more rows ``` A similar result is obtained via `unpivotr::as_cells()`. ``` original <- read_excel(path, sheet = "pivot-annotations", col_names = FALSE) ``` ``` ## New names: ## * `` -> ...1 ## * `` -> ...2 ## * `` -> ...3 ## * `` -> ...4 ## * `` -> ...5 ## * ... ``` ``` as_cells(original) %>% arrange(row, col) %>% print(n = 20) ``` ``` ## # A tibble: 36 x 4 ## row col data_type chr ## <int> <int> <chr> <chr> ## 1 1 1 chr <NA> ## 2 1 2 chr <NA> ## 3 1 3 chr Female ## 4 1 4 chr <NA> ## 5 1 5 chr Male ## 6 1 6 chr <NA> ## 7 2 1 chr <NA> ## 8 2 2 chr <NA> ## 9 2 3 chr Matilda ## 10 2 4 chr Olivia ## 11 2 5 chr Nicholas ## 12 2 6 chr Paul ## 13 3 1 chr Humanities ## 14 3 2 chr Classics ## 15 3 3 chr 1 ## 16 3 4 chr 2 ## 17 3 5 chr 3 ## 18 3 6 chr 0 ## 19 4 1 chr <NA> ## 20 4 2 chr History ## # … with 16 more rows ``` (One difference is that `read_excel()` has filled in some missing cells with blanks, which `as_cells()` retains. Another is that `read_excel()` has coerced all data types to `character`, whereas `xlsx_cells()` preserved the original data types.) The tidyxl version is easier to traverse, because it describes the position of each cell as well as the value. To filter for the first row of headers: ``` dplyr::filter(cells, row == 2, !is_blank) %>% select(row, col, character, numeric) ``` ``` ## # A tibble: 2 x 4 ## row col character numeric ## <int> <int> <chr> <dbl> ## 1 2 4 Female NA ## 2 2 6 Male NA ``` Or to filter for cells containing data (in this case, we know that only data cells are numeric) ``` dplyr::filter(cells, data_type == "numeric") %>% select(row, col, numeric) ``` ``` ## # A tibble: 16 x 3 ## row col numeric ## <int> <int> <dbl> ## 1 4 4 1 ## 2 4 5 2 ## 3 4 6 3 ## 4 4 7 0 ## 5 5 4 3 ## 6 5 5 4 ## 7 5 6 5 ## 8 5 7 1 ## 9 6 4 5 ## 10 6 5 6 ## 11 6 6 9 ## 12 6 7 2 ## 13 7 4 7 ## 14 7 5 8 ## 15 7 6 12 ## 16 7 7 3 ``` By identifying the header cells separately from the data cells, and knowing exactly where they are on the sheet, we can associated the data cells with the relevant headers. To a human it is intuitive that the cells below and to the right of the header `Male` represent males, and that ones to the right of and below the header `Postgraduate qualification` represent people with postgraduate qualifications, but it isn’t so obvious to the computer. How would the computer know that the header `Male` doesn’t also relate to the column of cells below and to the left, beginning with `2`? This section shows how you can express the relationships between headers and data cells, using the [unpivotr](https://nacnudus.github.io/unpivotr) package.
https://nacnudus.github.io/spreadsheet-munging-strategies/pivot-simple.html
3\.1 Simple unpivoting ---------------------- The `behead()` function takes one level of headers from a pivot table and makes it part of the data. Think of it like `tidyr::gather()`, except that it works when there is more than one row of headers (or more than one column of row\-headers), and it only works on tables that have first come through `as_cells()` or `tidyxl::xlsx_cells()`. ### 3\.1\.1 Two clear rows of text column headers, left\-aligned Here we have a pivot table with two rows of column headers. The first row of headers is left\-aligned, so `"Female"` applies to the first two columns of data, and `"Male"` applies to the next two. The second row of headers has a header in every column. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(col >= 4, !is_blank) %>% # Ignore the row headers in this example select(row, col, data_type, character, numeric) all_cells ``` ``` ## # A tibble: 22 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 4 numeric <NA> 1 ## 8 4 5 numeric <NA> 2 ## 9 4 6 numeric <NA> 3 ## 10 4 7 numeric <NA> 0 ## # … with 12 more rows ``` The `behead()` function takes the ‘melted’ output of `as_cells()`, `tidyxl::xlsx_cells()`, or a previous `behead()`, and three more arguments to specify how the header cells relate to the data cells. The outermost header is the top row, `"Female" NA "Male" NA`. The `"Female"` and `"Male"` headers are up\-and\-to\-the\-left\-of the data cells. We express this as `"up-left"`. We also give the headers a name, `sex`, and say which column of `all_cells` contains the value of the header cells – it’s usually the `character` column. ``` all_cells %>% behead("up-left", sex) ``` ``` ## # A tibble: 20 x 6 ## row col data_type character numeric sex ## <int> <int> <chr> <chr> <dbl> <chr> ## 1 3 4 character Matilda NA Female ## 2 3 5 character Olivia NA Female ## 3 4 4 numeric <NA> 1 Female ## 4 4 5 numeric <NA> 2 Female ## 5 5 4 numeric <NA> 3 Female ## 6 5 5 numeric <NA> 4 Female ## 7 6 4 numeric <NA> 5 Female ## 8 6 5 numeric <NA> 6 Female ## 9 7 4 numeric <NA> 7 Female ## 10 7 5 numeric <NA> 8 Female ## 11 3 6 character Nicholas NA Male ## 12 3 7 character Paul NA Male ## 13 4 6 numeric <NA> 3 Male ## 14 4 7 numeric <NA> 0 Male ## 15 5 6 numeric <NA> 5 Male ## 16 5 7 numeric <NA> 1 Male ## 17 6 6 numeric <NA> 9 Male ## 18 6 7 numeric <NA> 2 Male ## 19 7 6 numeric <NA> 12 Male ## 20 7 7 numeric <NA> 3 Male ``` That did half the job. The value 2 in row 4 column 5 is indeed a score of a female. But the value `"matilda"` in row 3 column 4 isn’t a population – it’s another header. The next step is to strip that second level of column headers. This time, the direction is `"up"`, because the headers are directly up from the associated data cells, and we call it `name`, because it represents names of people. 
``` all_cells %>% behead("up-left", sex) %>% behead("up", `name`) ``` ``` ## # A tibble: 16 x 7 ## row col data_type character numeric sex name ## <int> <int> <chr> <chr> <dbl> <chr> <chr> ## 1 4 4 numeric <NA> 1 Female Matilda ## 2 4 5 numeric <NA> 2 Female Olivia ## 3 5 4 numeric <NA> 3 Female Matilda ## 4 5 5 numeric <NA> 4 Female Olivia ## 5 6 4 numeric <NA> 5 Female Matilda ## 6 6 5 numeric <NA> 6 Female Olivia ## 7 7 4 numeric <NA> 7 Female Matilda ## 8 7 5 numeric <NA> 8 Female Olivia ## 9 4 6 numeric <NA> 3 Male Nicholas ## 10 4 7 numeric <NA> 0 Male Paul ## 11 5 6 numeric <NA> 5 Male Nicholas ## 12 5 7 numeric <NA> 1 Male Paul ## 13 6 6 numeric <NA> 9 Male Nicholas ## 14 6 7 numeric <NA> 2 Male Paul ## 15 7 6 numeric <NA> 12 Male Nicholas ## 16 7 7 numeric <NA> 3 Male Paul ``` A final step is a normal clean\-up. We drop the `row`, `col` and `character` columns, and we rename the `numeric` column to `score`, which is what it represents. ``` all_cells %>% behead("up-left", sex) %>% behead("up", `name`) %>% select(score = numeric, sex, `name`) ``` ``` ## # A tibble: 16 x 3 ## score sex name ## <dbl> <chr> <chr> ## 1 1 Female Matilda ## 2 2 Female Olivia ## 3 3 Female Matilda ## 4 4 Female Olivia ## 5 5 Female Matilda ## 6 6 Female Olivia ## 7 7 Female Matilda ## 8 8 Female Olivia ## 9 3 Male Nicholas ## 10 0 Male Paul ## 11 5 Male Nicholas ## 12 1 Male Paul ## 13 9 Male Nicholas ## 14 2 Male Paul ## 15 12 Male Nicholas ## 16 3 Male Paul ``` ### 3\.1\.2 Two clear rows and columns of text headers, top\-aligned and left\-aligned There are no new techniques are used, just more directions: `"left"` for headers directly to the left of the data cells, and `"left-up"` for headers left\-then\-up from the data cells. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 28 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 2 character Humanities NA ## 8 4 3 character Classics NA ## 9 4 4 numeric <NA> 1 ## 10 4 5 numeric <NA> 2 ## # … with 18 more rows ``` ``` all_cells %>% behead("up-left", sex) %>% # As before behead("up", `name`) %>% # As before behead("left-up", field) %>% # Left-and-above behead("left", subject) %>% # Directly left rename(score = numeric) %>% select(-row, -col, -character) ``` ``` ## # A tibble: 16 x 6 ## data_type score sex name field subject ## <chr> <dbl> <chr> <chr> <chr> <chr> ## 1 numeric 1 Female Matilda Humanities Classics ## 2 numeric 2 Female Olivia Humanities Classics ## 3 numeric 3 Female Matilda Humanities History ## 4 numeric 4 Female Olivia Humanities History ## 5 numeric 3 Male Nicholas Humanities Classics ## 6 numeric 0 Male Paul Humanities Classics ## 7 numeric 5 Male Nicholas Humanities History ## 8 numeric 1 Male Paul Humanities History ## 9 numeric 5 Female Matilda Performance Music ## 10 numeric 6 Female Olivia Performance Music ## 11 numeric 7 Female Matilda Performance Drama ## 12 numeric 8 Female Olivia Performance Drama ## 13 numeric 9 Male Nicholas Performance Music ## 14 numeric 2 Male Paul Performance Music ## 15 numeric 12 Male Nicholas Performance Drama ## 16 numeric 3 Male Paul Performance Drama ``` ### 3\.1\.3 Multiple rows or columns of 
headers, with meaningful formatting This is a combination of the previous section with [meaningfully formatted rows](tidy-formatted-rows). The section [meaningfully formatted cells](tidy-formatted-cells) doesn’t work here, because the unpivoting of multiple rows/columns of headers complicates the relationship between the data and the formatting. 1. Unpivot the multiple rows/columns of headers, as above, but keep the `row` and `col` of each data cell. 2. Collect the `row`, `col` and formatting of each data cell. 3. Join the data to the formatting by the `row` and `col`. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 28 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 2 character Humanities NA ## 8 4 3 character Classics NA ## 9 4 4 numeric <NA> 1 ## 10 4 5 numeric <NA> 2 ## # … with 18 more rows ``` ``` unpivoted <- all_cells %>% behead("up-left", sex) %>% # As before behead("up", `name`) %>% # As before behead("left-up", field) %>% # Left-and-above behead("left", subject) %>% # Directly left rename(score = numeric) %>% select(-character) # Retain the row and col for now unpivoted ``` ``` ## # A tibble: 16 x 8 ## row col data_type score sex name field subject ## <int> <int> <chr> <dbl> <chr> <chr> <chr> <chr> ## 1 4 4 numeric 1 Female Matilda Humanities Classics ## 2 4 5 numeric 2 Female Olivia Humanities Classics ## 3 5 4 numeric 3 Female Matilda Humanities History ## 4 5 5 numeric 4 Female Olivia Humanities History ## 5 4 6 numeric 3 Male Nicholas Humanities Classics ## 6 4 7 numeric 0 Male Paul Humanities Classics ## 7 5 6 numeric 5 Male Nicholas Humanities History ## 8 5 7 numeric 1 Male Paul Humanities History ## 9 6 4 numeric 5 Female Matilda Performance Music ## 10 6 5 numeric 6 Female Olivia Performance Music ## 11 7 4 numeric 7 Female Matilda Performance Drama ## 12 7 5 numeric 8 Female Olivia Performance Drama ## 13 6 6 numeric 9 Male Nicholas Performance Music ## 14 6 7 numeric 2 Male Paul Performance Music ## 15 7 6 numeric 12 Male Nicholas Performance Drama ## 16 7 7 numeric 3 Male Paul Performance Drama ``` ``` # `fill_colours` is a palette of fill colours that can be indexed by the # `local_format_id` of a given cell to get the fill colour of that cell fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb fill_colours ``` ``` ## [1] NA NA NA NA NA NA NA NA ## [9] "FFFFFF00" "FF92D050" "FFFFFF00" NA NA "FFFFFF00" NA NA ## [17] NA NA NA NA NA "FFFFFF00" "FFFFFF00" NA ## [25] NA "FFFFFF00" NA NA NA NA NA NA ## [33] NA NA NA NA NA NA NA NA ## [41] NA NA NA NA NA NA NA NA ## [49] NA NA NA NA NA NA NA NA ## [57] "FFFFC7CE" NA NA ``` ``` # Import all the cells, omit the header rows and columns, and create a new # column `fill_colour` of the fill colours, by looking up the # local_format_id of each cell in the `fill_colours` palette. 
annotations <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(row >= 4, col >= 4) %>% # Omit the headers mutate(fill_colour = fill_colours[local_format_id]) %>% select(row, col, fill_colour) annotations ``` ``` ## # A tibble: 16 x 3 ## row col fill_colour ## <int> <int> <chr> ## 1 4 4 <NA> ## 2 4 5 FFFFFF00 ## 3 4 6 <NA> ## 4 4 7 <NA> ## 5 5 4 FFFFFF00 ## 6 5 5 <NA> ## 7 5 6 <NA> ## 8 5 7 <NA> ## 9 6 4 <NA> ## 10 6 5 <NA> ## 11 6 6 <NA> ## 12 6 7 <NA> ## 13 7 4 <NA> ## 14 7 5 <NA> ## 15 7 6 FFFFFF00 ## 16 7 7 <NA> ``` ``` left_join(unpivoted, annotations, by = c("row", "col")) %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 7 ## data_type score sex name field subject fill_colour ## <chr> <dbl> <chr> <chr> <chr> <chr> <chr> ## 1 numeric 1 Female Matilda Humanities Classics <NA> ## 2 numeric 2 Female Olivia Humanities Classics FFFFFF00 ## 3 numeric 3 Female Matilda Humanities History FFFFFF00 ## 4 numeric 4 Female Olivia Humanities History <NA> ## 5 numeric 3 Male Nicholas Humanities Classics <NA> ## 6 numeric 0 Male Paul Humanities Classics <NA> ## 7 numeric 5 Male Nicholas Humanities History <NA> ## 8 numeric 1 Male Paul Humanities History <NA> ## 9 numeric 5 Female Matilda Performance Music <NA> ## 10 numeric 6 Female Olivia Performance Music <NA> ## 11 numeric 7 Female Matilda Performance Drama <NA> ## 12 numeric 8 Female Olivia Performance Drama <NA> ## 13 numeric 9 Male Nicholas Performance Music <NA> ## 14 numeric 2 Male Paul Performance Music <NA> ## 15 numeric 12 Male Nicholas Performance Drama FFFFFF00 ## 16 numeric 3 Male Paul Performance Drama <NA> ``` ### 3\.1\.4 Mixed headers and notes in the same row/column, distinguished by formatting This needs two passes over each row/column that contains a mixture. The first pass, with `behead_if()` is to deal with the cells that are headers, and the second pass, with `dplyr::filter()` removes the remaining cells that are notes. The `behead_if()` function takes predicate functions to choose which cells are headers. ``` # only treat bold cells beginning "Country: " as a header cells %>% behead_if(formats$local$font$bold[local_format_id], # true for bold cells str_detect(character, "^Country: "), # true for "Country: ..." direction = "left-up", # argument must be named name = "country_name") %>% dplyr::filter(col != 1L) # discard remaining cells ``` Note that the `direction` and `name` arguments must now be named, because they follow the `...`. After `behead_if()`, any cells that haven’t been treated as headers will still exist, so if you want to discard them then use `dplyr::filter()` on the column or row number. In the screenshot above, cells with italic or red text aren’t headers, even though they are in amongst header cells. First, identify the IDs of formats that have italic or red text. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") formats <- xlsx_formats(path) italic <- formats$local$font$italic # For 'red' we can either look for the RGB code for red "FFFF0000" red <- "FFFF0000" # Or we can find out what that code is by starting from a cell that we know is # red. red_cell_format_id <- xlsx_cells(path, sheets = "pivot-notes") %>% dplyr::filter(row == 5, col == 2) %>% pull(local_format_id) red_cell_format_id ``` ``` ## [1] 40 ``` ``` red <- formats$local$font$color$rgb[red_cell_format_id] red ``` ``` ## [1] "FFFF0000" ``` Now we use `behead_if()`, filtering out cells with the format IDs of red or italic cells. 
``` cells <- xlsx_cells(path, sheets = "pivot-notes") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 31 x 6 ## row col data_type character numeric local_format_id ## <int> <int> <chr> <chr> <dbl> <int> ## 1 2 4 character Female NA 18 ## 2 2 6 character Male NA 18 ## 3 2 7 character 0 = absent NA 39 ## 4 3 4 character Matilda NA 20 ## 5 3 5 character Olivia NA 21 ## 6 3 6 character Nicholas NA 20 ## 7 3 7 character Paul NA 21 ## 8 4 2 character Humanities NA 18 ## 9 4 3 character Classics NA 19 ## 10 4 4 numeric <NA> 1 33 ## # … with 21 more rows ``` ``` cells %>% behead_if(!italic[local_format_id], # not italic direction = "up-left", name = "sex") %>% dplyr::filter(row != min(row)) %>% # discard non-header cells behead("up", "name") %>% behead_if(formats$local$font$color$rgb[local_format_id] != red, # not red direction = "left-up", name = "field") %>% dplyr::filter(col != min(col)) %>% # discard non-headere cells behead("left", "subject") %>% select(sex, name, field, subject, score = numeric) ``` ``` ## # A tibble: 16 x 5 ## sex name field subject score ## <chr> <chr> <chr> <chr> <dbl> ## 1 Male Nicholas Humanities Classics 3 ## 2 Male Paul Humanities Classics 0 ## 3 Male Nicholas Humanities History 5 ## 4 Male Paul Humanities History 1 ## 5 Female Matilda Humanities Classics 1 ## 6 Female Olivia Humanities Classics 2 ## 7 Female Matilda Humanities History 3 ## 8 Female Olivia Humanities History 4 ## 9 Male Nicholas Performance Music 9 ## 10 Male Paul Performance Music 2 ## 11 Male Nicholas Performance Drama 12 ## 12 Male Paul Performance Drama 3 ## 13 Female Matilda Performance Music 5 ## 14 Female Olivia Performance Music 6 ## 15 Female Matilda Performance Drama 7 ## 16 Female Olivia Performance Drama 8 ``` ### 3\.1\.5 Mixed levels of headers in the same row/column, distinguished by formatting Normally different levels of headers are in different rows, or different columns, like [Two clear rows of text column headers, left\-aligned](2Rl). But sometimes they coexist in the same row or column, and are distinguishable by formatting, e.g. by indentation, or bold for the top level, italic for the mid level, and plain for the lowest level. In this example, there is a single column of row headers, where the levels are shown by different amounts of indentation. The indentation is done by formatting, rather than by leading spaces or tabs. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") formats <- xlsx_formats(path) formats$local$alignment$indent ``` ``` ## [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 ## [46] 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` We can use the indentation with `behead_if()` to make two passes over the column of row headers, first for the unindented headers, then for the indented headers. 
``` cells <- xlsx_cells(path, sheets = "pivot-hierarchy") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 16 x 6 ## row col data_type character numeric local_format_id ## <int> <int> <chr> <chr> <dbl> <int> ## 1 2 3 character Matilda NA 18 ## 2 2 4 character Nicholas NA 42 ## 3 3 2 character Humanities NA 18 ## 4 4 2 character Classics NA 44 ## 5 4 3 numeric <NA> 1 20 ## 6 4 4 numeric <NA> 3 45 ## 7 5 2 character History NA 44 ## 8 5 3 numeric <NA> 3 20 ## 9 5 4 numeric <NA> 5 45 ## 10 6 2 character Performance NA 20 ## 11 7 2 character Music NA 44 ## 12 7 3 numeric <NA> 5 20 ## 13 7 4 numeric <NA> 9 45 ## 14 8 2 character Drama NA 46 ## 15 8 3 numeric <NA> 7 24 ## 16 8 4 numeric <NA> 12 47 ``` ``` cells %>% behead_if(formats$local$alignment$indent[local_format_id] == 0, direction = "left-up", name = "field") %>% behead("left", "subject") %>% behead("up", "name") %>% select(field, subject, name, score = numeric) ``` ``` ## # A tibble: 8 x 4 ## field subject name score ## <chr> <chr> <chr> <dbl> ## 1 Humanities Classics Matilda 1 ## 2 Humanities Classics Nicholas 3 ## 3 Humanities History Matilda 3 ## 4 Humanities History Nicholas 5 ## 5 Performance Music Matilda 5 ## 6 Performance Music Nicholas 9 ## 7 Performance Drama Matilda 7 ## 8 Performance Drama Nicholas 12 ``` ### 3\.1\.1 Two clear rows of text column headers, left\-aligned Here we have a pivot table with two rows of column headers. The first row of headers is left\-aligned, so `"Female"` applies to the first two columns of data, and `"Male"` applies to the next two. The second row of headers has a header in every column. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(col >= 4, !is_blank) %>% # Ignore the row headers in this example select(row, col, data_type, character, numeric) all_cells ``` ``` ## # A tibble: 22 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 4 numeric <NA> 1 ## 8 4 5 numeric <NA> 2 ## 9 4 6 numeric <NA> 3 ## 10 4 7 numeric <NA> 0 ## # … with 12 more rows ``` The `behead()` function takes the ‘melted’ output of `as_cells()`, `tidyxl::xlsx_cells()`, or a previous `behead()`, and three more arguments to specify how the header cells relate to the data cells. The outermost header is the top row, `"Female" NA "Male" NA`. The `"Female"` and `"Male"` headers are up\-and\-to\-the\-left\-of the data cells. We express this as `"up-left"`. We also give the headers a name, `sex`, and say which column of `all_cells` contains the value of the header cells – it’s usually the `character` column. 
https://nacnudus.github.io/spreadsheet-munging-strategies/pivot-complex.html
3\.2 Complex unpivoting ----------------------- When `behead()` isn’t powerful enough (it makes certain assumptions, and it doesn’t understand formatting), then you can get much more control by using `enhead()`, which joins together two separate data frames of data cells and header cells. This kind of unpivoting is always done in two stages. 1. Identify which cells are headers, and which are data 2. State how the data cells relate to the header cells. ### 3\.2\.1 Two clear rows of text column headers, left\-aligned The first stage, identifying header vs data cells, is simply filtering. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(col >= 4, !is_blank) %>% # Ignore the row headers in this example select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 22 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 4 numeric <NA> 1 ## 8 4 5 numeric <NA> 2 ## 9 4 6 numeric <NA> 3 ## 10 4 7 numeric <NA> 0 ## # … with 12 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 6 x 5 ## `row/col` `4(D)` `5(E)` `6(F)` `7(G)` ## <int> <chr> <chr> <chr> <chr> ## 1 2 Female <NA> Male <NA> ## 2 3 Matilda Olivia Nicholas Paul ## 3 4 1 2 3 0 ## 4 5 3 4 5 1 ## 5 6 5 6 9 2 ## 6 7 7 8 12 3 ``` ``` first_header_row <- dplyr::filter(all_cells, row == 2) %>% select(row, col, sex = character) # the title of this header is 'sex' # the cells are text cells (`"Female"` and `"Male"`) so take the value in the # '`character` column. first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 4 Female ## 2 2 6 Male ``` ``` second_header_row <- dplyr::filter(all_cells, row == 3) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. second_header_row ``` ``` ## # A tibble: 4 x 3 ## row col name ## <int> <int> <chr> ## 1 3 4 Matilda ## 2 3 5 Olivia ## 3 3 6 Nicholas ## 4 3 7 Paul ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` ``` The second stage is to declare how the data cells relate to each row of column headers. Starting from the point of view of a data cell, the relevant column header from the second row of headers is the one directly `"up"`. ``` enhead(data_cells, second_header_row, "up") ``` ``` ## # A tibble: 16 x 4 ## row col score name ## <int> <int> <dbl> <chr> ## 1 4 4 1 Matilda ## 2 4 5 2 Olivia ## 3 4 6 3 Nicholas ## 4 4 7 0 Paul ## 5 5 4 3 Matilda ## 6 5 5 4 Olivia ## 7 5 6 5 Nicholas ## 8 5 7 1 Paul ## 9 6 4 5 Matilda ## 10 6 5 6 Olivia ## 11 6 6 9 Nicholas ## 12 6 7 2 Paul ## 13 7 4 7 Matilda ## 14 7 5 8 Olivia ## 15 7 6 12 Nicholas ## 16 7 7 3 Paul ``` The first row of headers, from the point of view of a data cell, is either directly up, or up\-then\-left. 
``` enhead(data_cells, first_header_row, "up-left") ``` ``` ## # A tibble: 16 x 4 ## row col score sex ## <int> <int> <dbl> <chr> ## 1 4 4 1 Female ## 2 4 5 2 Female ## 3 5 4 3 Female ## 4 5 5 4 Female ## 5 6 4 5 Female ## 6 6 5 6 Female ## 7 7 4 7 Female ## 8 7 5 8 Female ## 9 4 6 3 Male ## 10 4 7 0 Male ## 11 5 6 5 Male ## 12 5 7 1 Male ## 13 6 6 9 Male ## 14 6 7 2 Male ## 15 7 6 12 Male ## 16 7 7 3 Male ``` Piping everything together, we get a complete, tidy dataset, and can finally drop the `row` and `col` columns. ``` data_cells %>% enhead(first_header_row, "up-left") %>% enhead(second_header_row, "up") %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 3 ## score sex name ## <dbl> <chr> <chr> ## 1 1 Female Matilda ## 2 2 Female Olivia ## 3 3 Female Matilda ## 4 4 Female Olivia ## 5 5 Female Matilda ## 6 6 Female Olivia ## 7 7 Female Matilda ## 8 8 Female Olivia ## 9 3 Male Nicholas ## 10 0 Male Paul ## 11 5 Male Nicholas ## 12 1 Male Paul ## 13 9 Male Nicholas ## 14 2 Male Paul ## 15 12 Male Nicholas ## 16 3 Male Paul ``` ### 3\.2\.2 Two clear columns of text row headers, top\-aligned This is almost the same as [Two clear rows of text column headers, left\-aligned](2RL), but with different directions: `"left"` for directly left, and `"left-up"` for left\-then\-up. (`"up-left"` and `"left-up"` look like synonyms. They happen to be synonyms in `enhead()`, but they aren’t in `behead()`. In this example, the table has no column headers, only row headers. This is artificial here, but sometimes table are deliberately laid out in transpose form: the first column contains the headers, and the data extends in columns from left to right instead of from top to bottom. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(row >= 3, !is_blank) %>% # Ignore the column headers in this example select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 26 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 3 4 character Matilda NA ## 2 3 5 character Olivia NA ## 3 3 6 character Nicholas NA ## 4 3 7 character Paul NA ## 5 4 2 character Humanities NA ## 6 4 3 character Classics NA ## 7 4 4 numeric <NA> 1 ## 8 4 5 numeric <NA> 2 ## 9 4 6 numeric <NA> 3 ## 10 4 7 numeric <NA> 0 ## # … with 16 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 5 x 7 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` `6(F)` `7(G)` ## <int> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 3 <NA> <NA> Matilda Olivia Nicholas Paul ## 2 4 Humanities Classics 1 2 3 0 ## 3 5 <NA> History 3 4 5 1 ## 4 6 Performance Music 5 6 9 2 ## 5 7 <NA> Drama 7 8 12 3 ``` ``` first_header_col <- dplyr::filter(all_cells, col == 2) %>% select(row, col, field = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 4 2 Humanities ## 2 6 2 Performance ``` ``` second_header_col <- dplyr::filter(all_cells, col == 3) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. 
second_header_col ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 4 3 Classics ## 2 5 3 History ## 3 6 3 Music ## 4 7 3 Drama ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is examp scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells %>% enhead(first_header_col, "left-up") %>% enhead(second_header_col, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 3 ## score field subject ## <dbl> <chr> <chr> ## 1 1 Humanities Classics ## 2 2 Humanities Classics ## 3 3 Humanities Classics ## 4 0 Humanities Classics ## 5 3 Humanities History ## 6 4 Humanities History ## 7 5 Humanities History ## 8 1 Humanities History ## 9 5 Performance Music ## 10 6 Performance Music ## 11 9 Performance Music ## 12 2 Performance Music ## 13 7 Performance Drama ## 14 8 Performance Drama ## 15 12 Performance Drama ## 16 3 Performance Drama ``` ### 3\.2\.3 Two clear rows and columns of text headers, top\-aligned and left\-aligned This is a combination of the previous two sections. No new techniques are used. 1. Identify which cells are headers, and which are data 2. State how the data cells relate to the header cells. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 28 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 2 character Humanities NA ## 8 4 3 character Classics NA ## 9 4 4 numeric <NA> 1 ## 10 4 5 numeric <NA> 2 ## # … with 18 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 6 x 7 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` `6(F)` `7(G)` ## <int> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 2 <NA> <NA> Female <NA> Male <NA> ## 2 3 <NA> <NA> Matilda Olivia Nicholas Paul ## 3 4 Humanities Classics 1 2 3 0 ## 4 5 <NA> History 3 4 5 1 ## 5 6 Performance Music 5 6 9 2 ## 6 7 <NA> Drama 7 8 12 3 ``` ``` first_header_row <- dplyr::filter(all_cells, row == 2) %>% select(row, col, sex = character) # the title of this header is 'sex' # the cells are text cells (`"Female"` and `"Male"`) so take the value in the # '`character` column. first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 4 Female ## 2 2 6 Male ``` ``` second_header_row <- dplyr::filter(all_cells, row == 3) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. second_header_row ``` ``` ## # A tibble: 4 x 3 ## row col name ## <int> <int> <chr> ## 1 3 4 Matilda ## 2 3 5 Olivia ## 3 3 6 Nicholas ## 4 3 7 Paul ``` ``` first_header_col <- dplyr::filter(all_cells, col == 2) %>% select(row, col, field = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. 
first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 4 2 Humanities ## 2 6 2 Performance ``` ``` second_header_col <- dplyr::filter(all_cells, col == 3) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. second_header_col ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 4 3 Classics ## 2 5 3 History ## 3 6 3 Music ## 4 7 3 Drama ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells %>% enhead(first_header_row, "up-left") %>% enhead(second_header_row, "up") %>% enhead(first_header_col, "left-up") %>% enhead(second_header_col, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 5 ## score sex name field subject ## <dbl> <chr> <chr> <chr> <chr> ## 1 1 Female Matilda Humanities Classics ## 2 2 Female Olivia Humanities Classics ## 3 3 Female Matilda Humanities History ## 4 4 Female Olivia Humanities History ## 5 3 Male Nicholas Humanities Classics ## 6 0 Male Paul Humanities Classics ## 7 5 Male Nicholas Humanities History ## 8 1 Male Paul Humanities History ## 9 5 Female Matilda Performance Music ## 10 6 Female Olivia Performance Music ## 11 7 Female Matilda Performance Drama ## 12 8 Female Olivia Performance Drama ## 13 9 Male Nicholas Performance Music ## 14 2 Male Paul Performance Music ## 15 12 Male Nicholas Performance Drama ## 16 3 Male Paul Performance Drama ``` ### 3\.2\.4 Centre\-aligned headers Headers aren’t always aligned to one side of the data cells that they describe. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-centre-aligned") rectify(all_cells) ``` ``` ## # A tibble: 10 x 10 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` `6(F)` `7(G)` `8(H)` `9(I)` `10(J)` ## <int> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 2 <NA> <NA> <NA> Female <NA> <NA> <NA> Male <NA> ## 2 3 <NA> <NA> Leah Matilda Olivia Lenny Max Nicholas Paul ## 3 4 <NA> Classics 3 1 2 4 3 3 0 ## 4 5 Humanities History 8 3 4 7 5 5 1 ## 5 6 <NA> Literature 1 1 9 3 12 7 5 ## 6 7 <NA> Philosophy 5 10 10 8 2 5 12 ## 7 8 <NA> Languages 5 4 5 9 8 3 8 ## 8 9 <NA> Music 4 10 10 2 4 5 6 ## 9 10 Performance Dance 4 5 6 4 12 9 2 ## 10 11 <NA> Drama 2 7 8 6 1 12 3 ``` Looking at that table, it’s not immediately obvious where the boundary between `Female` and `Male` falls, or between `Humanities` and `Performance`. A naive approach would be to match the inner headers to the outer ones by proximity, and there are four directions to do so: `"up-ish"`, `"left-ish"`, `"down-ish"`, and `"right-ish"`. But in this case, those directions are too naive. * `Languages` is closest to the `Performance` header, but is a humanity. * `Lenny` is the same distance from `Female` as from `Male`. You can fix this by justifying the header cells towards one side of the data cells that they describe, and then use a direction like `"up-left"` as usual. Do this with `justify()`, providing the header cells with a second set of cells at the positions you want the header cells to move to. 
* `header_cells` is the cells whose value will be used as the header * `corner_cells` is the cells whose position is in one corner of the domain of the header (e.g. the top\-left\-hand corner). In the original spreadsheet, the borders mark the boundaries. So the corner cells of the headers can be found by filtering for cells with a particular border. ``` all_cells <- xlsx_cells(path, sheets = "pivot-centre-aligned") %>% select(row, col, is_blank, data_type, character, numeric, local_format_id) formats <- xlsx_formats(path) top_borders <- which(!is.na(formats$local$border$top$style)) left_borders <- which(!is.na(formats$local$border$left$style)) first_header_row_corners <- dplyr::filter(all_cells, row == 2, local_format_id %in% left_borders) %>% select(row, col) first_header_row_corners ``` ``` ## # A tibble: 2 x 2 ## row col ## <int> <int> ## 1 2 4 ## 2 2 7 ``` ``` first_header_col_corners <- dplyr::filter(all_cells, col == 2, local_format_id %in% top_borders) %>% select(row, col) first_header_col_corners ``` ``` ## # A tibble: 2 x 2 ## row col ## <int> <int> ## 1 4 2 ## 2 9 2 ``` Next, get the first row and first column of header cells as usual. ``` first_header_row <- dplyr::filter(all_cells, !is_blank, row == 2) %>% select(row, col, sex = character) # the title of this header is 'sex' # the cells are text cells (`"Female"` and `"Male"`) so take the value in the # '`character` column. first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 5 Female ## 2 2 9 Male ``` ``` first_header_col <- dplyr::filter(all_cells, !is_blank, col == 2) %>% select(row, col, field = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 5 2 Humanities ## 2 10 2 Performance ``` And now justify the header cells to the same positions as the corner cells. ``` first_header_row <- justify(first_header_row, first_header_row_corners) first_header_col <- justify(first_header_col, first_header_col_corners) first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 4 Female ## 2 2 7 Male ``` ``` first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 4 2 Humanities ## 2 9 2 Performance ``` The rest of this example is the same as “Two clear rows and columns of text headers, top\-aligned and left\-aligned”. ``` second_header_row <- dplyr::filter(all_cells, row == 3) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. second_header_row ``` ``` ## # A tibble: 7 x 3 ## row col name ## <int> <int> <chr> ## 1 3 4 Leah ## 2 3 5 Matilda ## 3 3 6 Olivia ## 4 3 7 Lenny ## 5 3 8 Max ## 6 3 9 Nicholas ## 7 3 10 Paul ``` ``` second_header_col <- dplyr::filter(all_cells, col == 3) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. 
second_header_col ``` ``` ## # A tibble: 8 x 3 ## row col subject ## <int> <int> <chr> ## 1 4 3 Classics ## 2 5 3 History ## 3 6 3 Literature ## 4 7 3 Philosophy ## 5 8 3 Languages ## 6 9 3 Music ## 7 10 3 Dance ## 8 11 3 Drama ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells %>% enhead(first_header_row, "up-left") %>% enhead(second_header_row, "up") %>% enhead(first_header_col, "left-up") %>% enhead(second_header_col, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 56 x 5 ## score sex name field subject ## <dbl> <chr> <chr> <chr> <chr> ## 1 3 Female Leah Humanities Classics ## 2 1 Female Matilda Humanities Classics ## 3 2 Female Olivia Humanities Classics ## 4 8 Female Leah Humanities History ## 5 3 Female Matilda Humanities History ## 6 4 Female Olivia Humanities History ## 7 1 Female Leah Humanities Literature ## 8 1 Female Matilda Humanities Literature ## 9 9 Female Olivia Humanities Literature ## 10 5 Female Leah Humanities Philosophy ## # … with 46 more rows ``` ### 3\.2\.5 Multiple rows or columns of headers, with meaningful formatting This is a combination of the previous section with [Meaningfully formatted cells](tidy-formatted-cells). The section [Meaningfully formatted rows](tidy-formatted-rows) doesn’t work here, because the unpivoting of multiple rows/columns of headers complicates the relationship between the data and the formatting. 1. Unpivot the multiple rows/columns of headers, as above, but keep the `row` and `col` of each data cell. 2. Collect the `row`, `col` and formatting of each data cell. 3. Join the data to the formatting by the `row` and `col`. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 28 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Female NA ## 2 2 6 character Male NA ## 3 3 4 character Matilda NA ## 4 3 5 character Olivia NA ## 5 3 6 character Nicholas NA ## 6 3 7 character Paul NA ## 7 4 2 character Humanities NA ## 8 4 3 character Classics NA ## 9 4 4 numeric <NA> 1 ## 10 4 5 numeric <NA> 2 ## # … with 18 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 6 x 7 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` `6(F)` `7(G)` ## <int> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 2 <NA> <NA> Female <NA> Male <NA> ## 2 3 <NA> <NA> Matilda Olivia Nicholas Paul ## 3 4 Humanities Classics 1 2 3 0 ## 4 5 <NA> History 3 4 5 1 ## 5 6 Performance Music 5 6 9 2 ## 6 7 <NA> Drama 7 8 12 3 ``` ``` first_header_row <- dplyr::filter(all_cells, row == 2) %>% select(row, col, sex = character) # the title of this header is 'sex' # the cells are text cells (`"Female"` and `"Male"`) so take the value in the # '`character` column. first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 4 Female ## 2 2 6 Male ``` ``` second_header_row <- dplyr::filter(all_cells, row == 3) %>% select(row, col, name = character) # The title of this header is 'name'. 
# The cells are text cells, so take the value in the '`character` column. second_header_row ``` ``` ## # A tibble: 4 x 3 ## row col name ## <int> <int> <chr> ## 1 3 4 Matilda ## 2 3 5 Olivia ## 3 3 6 Nicholas ## 4 3 7 Paul ``` ``` first_header_col <- dplyr::filter(all_cells, col == 2) %>% select(row, col, field = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 4 2 Humanities ## 2 6 2 Performance ``` ``` second_header_col <- dplyr::filter(all_cells, col == 3) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. second_header_col ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 4 3 Classics ## 2 5 3 History ## 3 6 3 Music ## 4 7 3 Drama ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` unpivoted <- data_cells %>% enhead(first_header_row, "up-left") %>% enhead(second_header_row, "up") %>% enhead(first_header_col, "left-up") %>% enhead(second_header_col, "left") # Don't delete the `row` and `col` columns yet, because we need them to join on # the formatting # `fill_colours` is a palette of fill colours that can be indexed by the # `local_format_id` of a given cell to get the fill colour of that cell fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb # Import all the cells, omit the header rows and columns, and create a new # column `fill_colour` by looking up the local_format_id of each cell in the # `fill_colours` palette. 
annotations <- xlsx_cells(path, sheets = "pivot-annotations") %>% dplyr::filter(row >= 4, col >= 4) %>% # Omit the headers mutate(fill_colour = fill_colours[local_format_id]) %>% select(row, col, fill_colour) annotations ``` ``` ## # A tibble: 16 x 3 ## row col fill_colour ## <int> <int> <chr> ## 1 4 4 <NA> ## 2 4 5 FFFFFF00 ## 3 4 6 <NA> ## 4 4 7 <NA> ## 5 5 4 FFFFFF00 ## 6 5 5 <NA> ## 7 5 6 <NA> ## 8 5 7 <NA> ## 9 6 4 <NA> ## 10 6 5 <NA> ## 11 6 6 <NA> ## 12 6 7 <NA> ## 13 7 4 <NA> ## 14 7 5 <NA> ## 15 7 6 FFFFFF00 ## 16 7 7 <NA> ``` ``` left_join(unpivoted, annotations, by = c("row", "col")) %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 6 ## score sex name field subject fill_colour ## <dbl> <chr> <chr> <chr> <chr> <chr> ## 1 1 Female Matilda Humanities Classics <NA> ## 2 2 Female Olivia Humanities Classics FFFFFF00 ## 3 3 Female Matilda Humanities History FFFFFF00 ## 4 4 Female Olivia Humanities History <NA> ## 5 3 Male Nicholas Humanities Classics <NA> ## 6 0 Male Paul Humanities Classics <NA> ## 7 5 Male Nicholas Humanities History <NA> ## 8 1 Male Paul Humanities History <NA> ## 9 5 Female Matilda Performance Music <NA> ## 10 6 Female Olivia Performance Music <NA> ## 11 7 Female Matilda Performance Drama <NA> ## 12 8 Female Olivia Performance Drama <NA> ## 13 9 Male Nicholas Performance Music <NA> ## 14 2 Male Paul Performance Music <NA> ## 15 12 Male Nicholas Performance Drama FFFFFF00 ## 16 3 Male Paul Performance Drama <NA> ``` ### 3\.2\.6 Mixed headers and notes in the same row/column, distinguished by formatting This doesn’t use any new techniques. The trick is, when selecting a row or column of header cells, to filter out ones that have the ‘wrong’ formatting (formatting that shows they aren’t really headers). In this example, cells with italic or red text aren’t headers, even if they are in amongst header cells. First, identify the IDs of formats that have italic or red text. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") formats <- xlsx_formats(path) italic <- which(formats$local$font$italic) # For 'red' we can either look for the RGB code for red "FFFF0000" red <- which(formats$local$font$color$rgb == "FFFF0000") red ``` ``` ## [1] 12 13 14 40 41 ``` ``` # Or we can find out what that code is by starting from a cell that we know is # red. red_cell_format_id <- xlsx_cells(path, sheets = "pivot-notes") %>% dplyr::filter(row == 5, col == 2) %>% pull(local_format_id) red_cell_format_id ``` ``` ## [1] 40 ``` ``` red_rgb <- formats$local$font$color$rgb[red_cell_format_id] red <- which(formats$local$font$color$rgb == red_rgb) red ``` ``` ## [1] 12 13 14 40 41 ``` Now we select the headers, filtering out cells with the format IDs of red or italic cells. ``` all_cells <- xlsx_cells(path, sheets = "pivot-notes") %>% dplyr::filter(!is_blank) %>% select(row, col, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 31 x 5 ## row col character numeric local_format_id ## <int> <int> <chr> <dbl> <int> ## 1 2 4 Female NA 18 ## 2 2 6 Male NA 18 ## 3 2 7 0 = absent NA 39 ## 4 3 4 Matilda NA 20 ## 5 3 5 Olivia NA 21 ## 6 3 6 Nicholas NA 20 ## 7 3 7 Paul NA 21 ## 8 4 2 Humanities NA 18 ## 9 4 3 Classics NA 19 ## 10 4 4 <NA> 1 33 ## # … with 21 more rows ``` ``` first_header_row <- dplyr::filter(all_cells, row == 2, !(local_format_id %in% c(red, italic))) %>% select(row, col, sex = character) # the title of this header is 'sex' # the cells are text cells (`"Female"` and `"Male"`) so take the value in the # '`character` column. 
first_header_row ``` ``` ## # A tibble: 2 x 3 ## row col sex ## <int> <int> <chr> ## 1 2 4 Female ## 2 2 6 Male ``` ``` first_header_col <- dplyr::filter(all_cells, col == 2, !(local_format_id %in% c(red, italic))) %>% select(row, col, qualification = character) # the title of this header is 'qualification', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. first_header_col ``` ``` ## # A tibble: 2 x 3 ## row col qualification ## <int> <int> <chr> ## 1 4 2 Humanities ## 2 6 2 Performance ``` ``` second_header_col <- dplyr::filter(all_cells, col == 3) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. data_cells %>% enhead(first_header_row, "up-left") %>% enhead(first_header_col, "left-up") %>% select(-row, -col) ``` ``` ## # A tibble: 16 x 3 ## score sex qualification ## <dbl> <chr> <chr> ## 1 1 Female Humanities ## 2 2 Female Humanities ## 3 3 Female Humanities ## 4 4 Female Humanities ## 5 3 Male Humanities ## 6 0 Male Humanities ## 7 5 Male Humanities ## 8 1 Male Humanities ## 9 5 Female Performance ## 10 6 Female Performance ## 11 7 Female Performance ## 12 8 Female Performance ## 13 9 Male Performance ## 14 2 Male Performance ## 15 12 Male Performance ## 16 3 Male Performance ``` ### 3\.2\.7 Mixed levels of headers in the same row/column, distinguished by formatting Normally different levels of headers are in different rows, or different columns, like [Two clear rows of text column headers, left\-aligned](2Rl). But sometimes they coexist in the same row or column, and are distinguishable by formatting, e.g. bold for the top level, italic for the mid level, and plain for the lowest level. In this example, there is a single column of row headers, where the levels are shown by different amounts of indentation. The indentation is done by formatting, rather than by leading spaces or tabs. The first step is to find the format IDs of all the different levels of indentation. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") formats <- xlsx_formats(path) indent0 <- which(formats$local$alignment$indent == 0) indent1 <- which(formats$local$alignment$indent == 1) indent0 ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 ## [31] 31 32 33 34 35 36 37 38 39 40 41 42 43 45 47 48 49 50 51 52 53 54 55 56 57 58 59 ``` ``` indent1 ``` ``` ## [1] 44 46 ``` Now we use these format IDs to identify the different levels of headers in the first column. 
``` all_cells <- xlsx_cells(path, sheets = "pivot-hierarchy") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 16 x 6 ## row col data_type character numeric local_format_id ## <int> <int> <chr> <chr> <dbl> <int> ## 1 2 3 character Matilda NA 18 ## 2 2 4 character Nicholas NA 42 ## 3 3 2 character Humanities NA 18 ## 4 4 2 character Classics NA 44 ## 5 4 3 numeric <NA> 1 20 ## 6 4 4 numeric <NA> 3 45 ## 7 5 2 character History NA 44 ## 8 5 3 numeric <NA> 3 20 ## 9 5 4 numeric <NA> 5 45 ## 10 6 2 character Performance NA 20 ## 11 7 2 character Music NA 44 ## 12 7 3 numeric <NA> 5 20 ## 13 7 4 numeric <NA> 9 45 ## 14 8 2 character Drama NA 46 ## 15 8 3 numeric <NA> 7 24 ## 16 8 4 numeric <NA> 12 47 ``` ``` field <- dplyr::filter(all_cells, col == 2, local_format_id %in% indent0) %>% select(row, col, field = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. field ``` ``` ## # A tibble: 2 x 3 ## row col field ## <int> <int> <chr> ## 1 3 2 Humanities ## 2 6 2 Performance ``` ``` subject <- dplyr::filter(all_cells, col == 2, local_format_id %in% indent1) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. subject ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 4 2 Classics ## 2 5 2 History ## 3 7 2 Music ## 4 8 2 Drama ``` ``` name <- dplyr::filter(all_cells, row == 2) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. name ``` ``` ## # A tibble: 2 x 3 ## row col name ## <int> <int> <chr> ## 1 2 3 Matilda ## 2 2 4 Nicholas ``` ``` data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells %>% enhead(field, "left-up") %>% enhead(subject, "left") %>% enhead(name, "up") %>% select(-row, -col) ``` ``` ## # A tibble: 8 x 4 ## score field subject name ## <dbl> <chr> <chr> <chr> ## 1 1 Humanities Classics Matilda ## 2 3 Humanities Classics Nicholas ## 3 3 Humanities History Matilda ## 4 5 Humanities History Nicholas ## 5 5 Performance Music Matilda ## 6 9 Performance Music Nicholas ## 7 7 Performance Drama Matilda ## 8 12 Performance Drama Nicholas ``` ### 3\.2\.8 Repeated rows/columns of headers within the table Repetitions can simply be ignored. Select one of the sets of headers, and use it for all the data. In this example, the data cells are easy to distinguish from the headers mixed in among them, because only the data cells have the `numeric` data type. 
``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-repeated-headers") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% print() ``` ``` ## # A tibble: 80 x 5 ## row col data_type character numeric ## <int> <int> <chr> <chr> <dbl> ## 1 2 4 character Term 1 NA ## 2 2 5 character Term 2 NA ## 3 2 6 character Term 3 NA ## 4 3 2 character Classics NA ## 5 3 3 character Matilda NA ## 6 3 4 numeric <NA> 1 ## 7 3 5 numeric <NA> 8 ## 8 3 6 numeric <NA> 7 ## 9 4 3 character Nicholas NA ## 10 4 4 numeric <NA> 3 ## # … with 70 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 20 x 6 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` `6(F)` ## <int> <chr> <chr> <chr> <chr> <chr> ## 1 2 <NA> <NA> Term 1 Term 2 Term 3 ## 2 3 Classics Matilda 1 8 7 ## 3 4 <NA> Nicholas 3 1 2 ## 4 5 <NA> Olivia 4 0 1 ## 5 6 <NA> Paul 2 4 8 ## 6 7 <NA> <NA> Term 1 Term 2 Term 3 ## 7 8 History Matilda 4 7 3 ## 8 9 <NA> Nicholas 3 5 5 ## 9 10 <NA> Olivia 9 8 5 ## 10 11 <NA> Paul 6 2 0 ## 11 12 <NA> <NA> Term 1 Term 2 Term 3 ## 12 13 Music Matilda 2 9 9 ## 13 14 <NA> Nicholas 1 7 7 ## 14 15 <NA> Olivia 0 3 5 ## 15 16 <NA> Paul 2 2 3 ## 16 17 <NA> <NA> Term 1 Term 2 Term 3 ## 17 18 Drama Matilda 9 8 9 ## 18 19 <NA> Nicholas 1 3 4 ## 19 20 <NA> Olivia 6 1 4 ## 20 21 <NA> Paul 6 0 2 ``` ``` # The 'term' headers appear four times, but only the first one is needed. term <- dplyr::filter(all_cells, row == 2) %>% select(row, col, term = character) # The title of this header is 'term'. # The cells are text cells (`"Term 1"`, etc.) so take the value # in the '`character` column. term ``` ``` ## # A tibble: 3 x 3 ## row col term ## <int> <int> <chr> ## 1 2 4 Term 1 ## 2 2 5 Term 2 ## 3 2 6 Term 3 ``` ``` subject <- dplyr::filter(all_cells, col == 2) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. subject ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 3 2 Classics ## 2 8 2 History ## 3 13 2 Music ## 4 18 2 Drama ``` ``` name <- dplyr::filter(all_cells, col == 3) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. name ``` ``` ## # A tibble: 16 x 3 ## row col name ## <int> <int> <chr> ## 1 3 3 Matilda ## 2 4 3 Nicholas ## 3 5 3 Olivia ## 4 6 3 Paul ## 5 8 3 Matilda ## 6 9 3 Nicholas ## 7 10 3 Olivia ## 8 11 3 Paul ## 9 13 3 Matilda ## 10 14 3 Nicholas ## 11 15 3 Olivia ## 12 16 3 Paul ## 13 18 3 Matilda ## 14 19 3 Nicholas ## 15 20 3 Olivia ## 16 21 3 Paul ``` ``` # The data cells are distinguished from the 'term' headers by their data type -- # the data cells are numeric, whereas the term headers are character. data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. 
If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells ``` ``` ## # A tibble: 48 x 3 ## row col score ## <int> <int> <dbl> ## 1 3 4 1 ## 2 3 5 8 ## 3 3 6 7 ## 4 4 4 3 ## 5 4 5 1 ## 6 4 6 2 ## 7 5 4 4 ## 8 5 5 0 ## 9 5 6 1 ## 10 6 4 2 ## # … with 38 more rows ``` ``` data_cells %>% enhead(term, "up") %>% enhead(subject, "up-left") %>% enhead(name, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 48 x 4 ## score term subject name ## <dbl> <chr> <chr> <chr> ## 1 1 Term 1 Classics Matilda ## 2 8 Term 2 Classics Matilda ## 3 7 Term 3 Classics Matilda ## 4 3 Term 1 Classics Nicholas ## 5 1 Term 2 Classics Nicholas ## 6 2 Term 3 Classics Nicholas ## 7 4 Term 1 Classics Olivia ## 8 0 Term 2 Classics Olivia ## 9 1 Term 3 Classics Olivia ## 10 2 Term 1 Classics Paul ## # … with 38 more rows ``` ### 3\.2\.9 Headers amongst the data This happens when what is actually a row\-header, instead of being presented to the left of the data, is presented above the data. (Alternatively, what is actually a column header, instead of being presented above the data, is presented to the side.) The way to handle it is to *pretend* that it is a row header, and use the `"left-up"` direction as normal. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-header-within-data") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 80 x 6 ## row col data_type character numeric local_format_id ## <int> <int> <chr> <chr> <dbl> <int> ## 1 2 3 character Classics NA 2 ## 2 3 3 character Term 1 NA 20 ## 3 3 4 character Term 2 NA 37 ## 4 3 5 character Term 3 NA 21 ## 5 4 2 character Matilda NA 18 ## 6 4 3 numeric <NA> 4 18 ## 7 4 4 numeric <NA> 0 27 ## 8 4 5 numeric <NA> 7 19 ## 9 5 2 character Nicholas NA 20 ## 10 5 3 numeric <NA> 4 20 ## # … with 70 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 24 x 5 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` ## <int> <chr> <chr> <chr> <chr> ## 1 2 <NA> Classics <NA> <NA> ## 2 3 <NA> Term 1 Term 2 Term 3 ## 3 4 Matilda 4 0 7 ## 4 5 Nicholas 4 6 2 ## 5 6 Olivia 9 9 9 ## 6 7 Paul 5 0 0 ## 7 8 <NA> History <NA> <NA> ## 8 9 <NA> Term 1 Term 2 Term 3 ## 9 10 Matilda 0 4 2 ## 10 11 Nicholas 2 5 2 ## # … with 14 more rows ``` ``` bold <- which(xlsx_formats(path)$local$font$bold) # The subject headers, though mixed with the data and the 'term' headers, are # distinguishable by the data type "character" and by being bold. subject <- dplyr::filter(all_cells, col == 3, data_type == "character", local_format_id %in% bold) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. subject ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 2 3 Classics ## 2 8 3 History ## 3 14 3 Music ## 4 20 3 Drama ``` ``` # We only need one set of the 'term' headers term <- dplyr::filter(all_cells, row == 3, data_type == "character") %>% select(row, col, term = character) # The title of this header is 'term'. # The cells are text cells (`"Term 1"`, etc.) so take the value # in the '`character` column. 
term ``` ``` ## # A tibble: 3 x 3 ## row col term ## <int> <int> <chr> ## 1 3 3 Term 1 ## 2 3 4 Term 2 ## 3 3 5 Term 3 ``` ``` name <- dplyr::filter(all_cells, col == 2) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. name ``` ``` ## # A tibble: 16 x 3 ## row col name ## <int> <int> <chr> ## 1 4 2 Matilda ## 2 5 2 Nicholas ## 3 6 2 Olivia ## 4 7 2 Paul ## 5 10 2 Matilda ## 6 11 2 Nicholas ## 7 12 2 Olivia ## 8 13 2 Paul ## 9 16 2 Matilda ## 10 17 2 Nicholas ## 11 18 2 Olivia ## 12 19 2 Paul ## 13 22 2 Matilda ## 14 23 2 Nicholas ## 15 24 2 Olivia ## 16 25 2 Paul ``` ``` # The data cells are distinguished from the 'subject' headers by their data # type -- the data cells are numeric, whereas the term headers are character. data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells ``` ``` ## # A tibble: 48 x 3 ## row col score ## <int> <int> <dbl> ## 1 4 3 4 ## 2 4 4 0 ## 3 4 5 7 ## 4 5 3 4 ## 5 5 4 6 ## 6 5 5 2 ## 7 6 3 9 ## 8 6 4 9 ## 9 6 5 9 ## 10 7 3 5 ## # … with 38 more rows ``` ``` data_cells %>% enhead(subject, "left-up") %>% enhead(term, "up") %>% enhead(name, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 48 x 4 ## score subject term name ## <dbl> <chr> <chr> <chr> ## 1 4 Classics Term 1 Matilda ## 2 0 Classics Term 2 Matilda ## 3 7 Classics Term 3 Matilda ## 4 4 Classics Term 1 Nicholas ## 5 6 Classics Term 2 Nicholas ## 6 2 Classics Term 3 Nicholas ## 7 9 Classics Term 1 Olivia ## 8 9 Classics Term 2 Olivia ## 9 9 Classics Term 3 Olivia ## 10 5 Classics Term 1 Paul ## # … with 38 more rows ```
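For comparison, many of the layouts in this section can also be unpivoted with `behead()`, which strips each row or column of headers as it goes, so you never build the header data frames or `data_cells` by hand. Below is a minimal sketch, not part of the original worked examples, assuming the same `pivot-annotations` sheet used in the two-rows-and-two-columns example above; the directions mirror the `enhead()` calls in that example.

```
library(tidyxl)
library(unpivotr)
library(dplyr)

path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")

# Behead the outer and inner column headers, then the outer and inner row
# headers; the cells that remain are the data cells, now carrying the new
# header columns.
xlsx_cells(path, sheets = "pivot-annotations") %>%
  dplyr::filter(!is_blank) %>%
  behead("up-left", sex) %>%   # outer column header ("Female", "Male")
  behead("up", name) %>%       # inner column header (pupil names)
  behead("left-up", field) %>% # outer row header ("Humanities", "Performance")
  behead("left", subject) %>%  # inner row header (subject names)
  select(sex, name, field, subject, score = numeric)
```

`behead()` is usually shorter when the headers sit in clean rows and columns, whereas `enhead()` gives finer control when the header cells first need to be filtered or justified, as in the centre-aligned and formatting-based examples above.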
name ``` ``` ## # A tibble: 16 x 3 ## row col name ## <int> <int> <chr> ## 1 3 3 Matilda ## 2 4 3 Nicholas ## 3 5 3 Olivia ## 4 6 3 Paul ## 5 8 3 Matilda ## 6 9 3 Nicholas ## 7 10 3 Olivia ## 8 11 3 Paul ## 9 13 3 Matilda ## 10 14 3 Nicholas ## 11 15 3 Olivia ## 12 16 3 Paul ## 13 18 3 Matilda ## 14 19 3 Nicholas ## 15 20 3 Olivia ## 16 21 3 Paul ``` ``` # The data cells are distinguished from the 'term' headers by their data type -- # the data cells are numeric, whereas the term headers are character. data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells ``` ``` ## # A tibble: 48 x 3 ## row col score ## <int> <int> <dbl> ## 1 3 4 1 ## 2 3 5 8 ## 3 3 6 7 ## 4 4 4 3 ## 5 4 5 1 ## 6 4 6 2 ## 7 5 4 4 ## 8 5 5 0 ## 9 5 6 1 ## 10 6 4 2 ## # … with 38 more rows ``` ``` data_cells %>% enhead(term, "up") %>% enhead(subject, "up-left") %>% enhead(name, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 48 x 4 ## score term subject name ## <dbl> <chr> <chr> <chr> ## 1 1 Term 1 Classics Matilda ## 2 8 Term 2 Classics Matilda ## 3 7 Term 3 Classics Matilda ## 4 3 Term 1 Classics Nicholas ## 5 1 Term 2 Classics Nicholas ## 6 2 Term 3 Classics Nicholas ## 7 4 Term 1 Classics Olivia ## 8 0 Term 2 Classics Olivia ## 9 1 Term 3 Classics Olivia ## 10 2 Term 1 Classics Paul ## # … with 38 more rows ``` ### 3\.2\.9 Headers amongst the data This happens when what is actually a row\-header, instead of being presented to the left of the data, is presented above the data. (Alternatively, what is actually a column header, instead of being presented above the data, is presented to the side.) The way to handle it is to *pretend* that it is a row header, and use the `"left-up"` direction as normal. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "pivot-header-within-data") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) %>% print() ``` ``` ## # A tibble: 80 x 6 ## row col data_type character numeric local_format_id ## <int> <int> <chr> <chr> <dbl> <int> ## 1 2 3 character Classics NA 2 ## 2 3 3 character Term 1 NA 20 ## 3 3 4 character Term 2 NA 37 ## 4 3 5 character Term 3 NA 21 ## 5 4 2 character Matilda NA 18 ## 6 4 3 numeric <NA> 4 18 ## 7 4 4 numeric <NA> 0 27 ## 8 4 5 numeric <NA> 7 19 ## 9 5 2 character Nicholas NA 20 ## 10 5 3 numeric <NA> 4 20 ## # … with 70 more rows ``` ``` # View the cells in their original positions on the spreadsheet rectify(all_cells) ``` ``` ## # A tibble: 24 x 5 ## `row/col` `2(B)` `3(C)` `4(D)` `5(E)` ## <int> <chr> <chr> <chr> <chr> ## 1 2 <NA> Classics <NA> <NA> ## 2 3 <NA> Term 1 Term 2 Term 3 ## 3 4 Matilda 4 0 7 ## 4 5 Nicholas 4 6 2 ## 5 6 Olivia 9 9 9 ## 6 7 Paul 5 0 0 ## 7 8 <NA> History <NA> <NA> ## 8 9 <NA> Term 1 Term 2 Term 3 ## 9 10 Matilda 0 4 2 ## 10 11 Nicholas 2 5 2 ## # … with 14 more rows ``` ``` bold <- which(xlsx_formats(path)$local$font$bold) # The subject headers, though mixed with the data and the 'term' headers, are # distinguishable by the data type "character" and by being bold. 
subject <- dplyr::filter(all_cells, col == 3, data_type == "character", local_format_id %in% bold) %>% select(row, col, subject = character) # The title of this header is 'subject' # The cells are text cells (`"history"`, etc.) so take the value in the # '`character` column. subject ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 2 3 Classics ## 2 8 3 History ## 3 14 3 Music ## 4 20 3 Drama ``` ``` # We only need one set of the 'term' headers term <- dplyr::filter(all_cells, row == 3, data_type == "character") %>% select(row, col, term = character) # the title of this header is 'field', meaning 'group of subjects'. # The cells are text cells (`"Humanities"`, `"Performance"`) so take the value # in the '`character` column. term ``` ``` ## # A tibble: 3 x 3 ## row col term ## <int> <int> <chr> ## 1 3 3 Term 1 ## 2 3 4 Term 2 ## 3 3 5 Term 3 ``` ``` name <- dplyr::filter(all_cells, col == 2) %>% select(row, col, name = character) # The title of this header is 'name'. # The cells are text cells, so take the value in the '`character` column. name ``` ``` ## # A tibble: 16 x 3 ## row col name ## <int> <int> <chr> ## 1 4 2 Matilda ## 2 5 2 Nicholas ## 3 6 2 Olivia ## 4 7 2 Paul ## 5 10 2 Matilda ## 6 11 2 Nicholas ## 7 12 2 Olivia ## 8 13 2 Paul ## 9 16 2 Matilda ## 10 17 2 Nicholas ## 11 18 2 Olivia ## 12 19 2 Paul ## 13 22 2 Matilda ## 14 23 2 Nicholas ## 15 24 2 Olivia ## 16 25 2 Paul ``` ``` # The data cells are distinguished from the 'subject' headers by their data # type -- the data cells are numeric, whereas the term headers are character. data_cells <- dplyr::filter(all_cells, data_type == "numeric") %>% select(row, col, score = numeric) # The data is exam scores in certain subjects, so give the data that title. # The data is numeric, so select only that 'value'. If some of the data was # also text or true/false, then you would select the `character` and `logical` # columns as well as `numeric` data_cells ``` ``` ## # A tibble: 48 x 3 ## row col score ## <int> <int> <dbl> ## 1 4 3 4 ## 2 4 4 0 ## 3 4 5 7 ## 4 5 3 4 ## 5 5 4 6 ## 6 5 5 2 ## 7 6 3 9 ## 8 6 4 9 ## 9 6 5 9 ## 10 7 3 5 ## # … with 38 more rows ``` ``` data_cells %>% enhead(subject, "left-up") %>% enhead(term, "up") %>% enhead(name, "left") %>% select(-row, -col) ``` ``` ## # A tibble: 48 x 4 ## score subject term name ## <dbl> <chr> <chr> <chr> ## 1 4 Classics Term 1 Matilda ## 2 0 Classics Term 2 Matilda ## 3 7 Classics Term 3 Matilda ## 4 4 Classics Term 1 Nicholas ## 5 6 Classics Term 2 Nicholas ## 6 2 Classics Term 3 Nicholas ## 7 9 Classics Term 1 Olivia ## 8 9 Classics Term 2 Olivia ## 9 9 Classics Term 3 Olivia ## 10 5 Classics Term 1 Paul ## # … with 38 more rows ```
Getting Cleaning and Wrangling Data
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/2-time-varying-regression.html
Chapter 2 Time\-varying regression ================================== Time\-varying regression is simply a linear regression where time is the explanatory variable: \\\[log(catch) \= \\alpha \+ \\beta t \+ \\beta\_2 t^2 \+ \\dots \+ e\_t\\] The error term ( \\(e\_t\\) ) was treated as an independent Normal error ( \\(\\sim N(0, \\sigma)\\) ) in Stergiou and Christou (1996\). If that is not a reasonable assumption, then it is simple to fit a non\-Gaussian error model in R. #### Order of the time polynomial The order of the polynomial of \\(t\\) determines how the time axis (x\-axis) relates to the overall trend in the data. A 1st order polynomial (\\(\\beta t\\)) will allow a linear relationship only. A 2nd order polynomial (\\(\\beta\_1 t \+ \\beta\_2 t^2\\)) will allow a convex or concave relationship with one peak. 3rd and 4th orders will allow more flexible relationships with more peaks.
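The remark about non\-Gaussian errors can be made concrete. The sketch below is not from this chapter: the data frame `df`, with a positive `catch` column and a time index `t`, is a hypothetical stand\-in for your own data. It fits the same linear time trend two ways, once with Normal errors on the log scale and once with Gamma errors and a log link via `glm()`.

```
# Hypothetical data frame `df` with a positive `catch` column and a time index `t`

# Normal errors on log(catch), as in the text
fit_lm <- lm(log(catch) ~ t, data = df)

# One non-Gaussian alternative: Gamma errors with a log link on the raw catch
fit_glm <- glm(catch ~ t, family = Gamma(link = "log"), data = df)

summary(fit_glm)
```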
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/2-1-fitting.html
2\.1 Fitting ------------ Fitting a time\-varying regression is done with the `lm()` function. For example, here is how to fit a 4th\-order polynomial for time to the anchovy data. We are fitting this model: \\\[log(Anchovy) \= \\alpha \+ \\beta t \+ \\beta\_2 t^2 \+ \\beta\_3 t^3 \+ \\beta\_4 t^4 \+ e\_t\\] First load in the data by loading the **FishForecast** package. `anchovy` is a data frame with year and log.metric.tons columns. `anchovy87` is the same data frame but with the years 1964 to 1987\. These are the years that Stergio and Christou use for fitting their models. They hold out 1988 and 1989 for forecast evaluation. ``` require(FishForecast) ``` We need to add on a column for \\(t\\) (and \\(t^2\\), \\(t^3\\), \\(t^4\\)) where the first year is \\(t\=1\\). We could regress against year (so 1964 to 1987\), but by convention, one regresses against 1 to the number of years or 0 to the number of years minus 1\. Stergiou and Christou did the former. ``` anchovy87$t = anchovy87$Year-1963 anchovy87$t2 = anchovy87$t^2 anchovy87$t3 = anchovy87$t^3 anchovy87$t4 = anchovy87$t^4 model <- lm(log.metric.tons ~ t + t2 + t3 + t4, data=anchovy87) ``` All our covariates are functions of \\(t\\), so we do not actually need to add on the \\(t^2\\), \\(t^3\\) and \\(t^4\\) to our data frame. We can use the `I()` function. This function is useful whenever you want to use a transformed value of a column of your data frame in your regression. ``` anchovy87$t = anchovy87$Year-1963 model <- lm(log.metric.tons ~ t + I(t^2) + I(t^3) + I(t^4), data=anchovy87) ``` Let’s look at the fit. ``` summary(model) ``` ``` ## ## Call: ## lm(formula = log.metric.tons ~ t + I(t^2) + I(t^3) + I(t^4), ## data = anchovy87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.26951 -0.09922 -0.01018 0.11777 0.20006 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 8.300e+00 1.953e-01 42.498 <2e-16 *** ## t 1.751e-01 1.035e-01 1.692 0.107 ## I(t^2) -2.182e-02 1.636e-02 -1.333 0.198 ## I(t^3) 1.183e-03 9.739e-04 1.215 0.239 ## I(t^4) -1.881e-05 1.934e-05 -0.972 0.343 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.1458 on 19 degrees of freedom ## Multiple R-squared: 0.9143, Adjusted R-squared: 0.8962 ## F-statistic: 50.65 on 4 and 19 DF, p-value: 7.096e-10 ``` ### 2\.1\.1 Orthogonal polynomials None of the time effects are significant despite an obvious linear temporal trend to the data. What’s going on? Well \\(t\\), \\(t^2\\), \\(t^3\\) and \\(t^4\\) are all highly correlated. Fitting a linear regression with multiple highly correlated covariates will not get you anywhere unless perhaps all the covariates are needed to explain the data. We will see the latter case for the sardine. In the anchovy case, multiple of the covariates could explain the linear\-ish trend. You could try fitting the first degree model \\(x\_t \= \\alpha \+ \\beta t \+ e\_t\\), then the second \\(x\_t \= \\alpha \+ \\beta\_1 t \+ \\beta\_2 t^2 \+ e\_t\\), then the third. This would reveal that in the first and second order fits, we get significant effects of time in our model. However the correct way to do this would be to use orthogonal polynomials. #### `poly()` function The `poly()` function creates orthogonal covariates for your polynomial. What does that mean? Let’s say you want to fit a model with a 2nd order polynomial of \\(t\\). It has \\(t\\) and \\(t^2\\), but using these as covariates directly lead to using two covariates that are highly correlated. 
Instead we want a covariate that explains \\(t\\) and another that explains the part of \\(t^2\\) that cannot be explained by \\(t\\). `poly()` creates these orthogonal covariates. The `poly()` function creates covariates with mean zero and identical variances. Covariates with different means and variances makes it hard to compare the estimated effect sizes. ``` T1 = 1:24; T2=T1^2 c(mean(T1),mean(T2),cov(T1, T2)) ``` ``` ## [1] 12.5000 204.1667 1250.0000 ``` ``` T1 = poly(T1,2)[,1]; T2=poly(T1,2)[,2] c(mean(T1),mean(T2),cov(T1, T2)) ``` ``` ## [1] 4.921826e-18 2.674139e-17 -4.949619e-20 ``` #### Using `poly()` to fit the anchovy data We saw in the anchovy fit that using \\(t\\), \\(t^2\\), \\(t^3\\) and \\(t^4\\) directly in the fit resulted in no significant estimated time effect despite a clear temporal trend in the data. If we fit with `poly()` so that we do not use correlated time covariates, we see a different picture. ``` model <- lm(log.metric.tons ~ poly(t,4), data=anchovy87) summary(model) ``` ``` ## ## Call: ## lm(formula = log.metric.tons ~ poly(t, 4), data = anchovy87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.26951 -0.09922 -0.01018 0.11777 0.20006 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.08880 0.02976 305.373 < 2e-16 *** ## poly(t, 4)1 1.97330 0.14581 13.534 3.31e-11 *** ## poly(t, 4)2 0.54728 0.14581 3.753 0.00135 ** ## poly(t, 4)3 0.30678 0.14581 2.104 0.04892 * ## poly(t, 4)4 -0.14180 0.14581 -0.972 0.34302 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.1458 on 19 degrees of freedom ## Multiple R-squared: 0.9143, Adjusted R-squared: 0.8962 ## F-statistic: 50.65 on 4 and 19 DF, p-value: 7.096e-10 ``` ### 2\.1\.2 Residual diagnostics We want to test if our residuals are temporally independent. We can do this with the Ljung\-Box test as Stergio and Christou do. For the Ljung\-Box test * Null hypothesis is that the data are independent * Alternate hypothesis is that the data are serially correlated #### Example of the Ljung\-Box test ``` Box.test(rnorm(100), type="Ljung-Box") ``` ``` ## ## Box-Ljung test ## ## data: rnorm(100) ## X-squared = 1.2273, df = 1, p-value = 0.2679 ``` The null hypothesis is not rejected. These are not serially correlated. Stergio and Christou appear to use a lag of 14 for the test (this is a bit large for 24 data points). The degrees of freedom is lag minus the number of estimated parameters in the model. So for the Anchovy data, \\(df \= 14 \- 2\\). ``` x <- resid(model) Box.test(x, lag = 14, type = "Ljung-Box", fitdf=2) ``` ``` ## ## Box-Ljung test ## ## data: x ## X-squared = 14.627, df = 12, p-value = 0.2625 ``` Compare to the values in the far right column in Table 4\. The null hypothesis of independence is rejected. #### Breusch\-Godfrey test Although Stergiou and Christou use the Ljung\-Box test, the Breusch\-Godfrey test is more standard for regression residuals. The forecast package has the `checkresiduals()` function which will run this test and some diagnostic plots. ``` forecast::checkresiduals(model) ``` ``` ## ## Breusch-Godfrey test for serial correlation of order up to 8 ## ## data: Residuals ## LM test = 12.858, df = 8, p-value = 0.1168 ``` ### 2\.1\.3 Compare to Stergiou and Christou Stergiou and Christou (1996\) fit time\-varying regressions to the 1964\-1987 data and show the results in Table 4\. 
Table 4 #### Compare anchovy fit to Stergiou and Christou Stergiou and Christou use a first order polynomial, linear relationship with time, for the anchovy data. They do not state how they choose this over a 2nd order polynomial which also appears supported (see fit with `poly()` fit to the anchovy data). ``` anchovy87$t = anchovy87$Year-1963 model <- lm(log.metric.tons ~ t, data=anchovy87) ``` The coefficients and adjusted R2 are similar to that shown in their Table 4\. The coefficients are not identical so there may be some differences in the data I extracted from the Greek statistical reports and those used in Stergiou and Christou. ``` c(coef(model), summary(model)$adj.r.squared) ``` ``` ## (Intercept) t ## 8.36143085 0.05818942 0.81856644 ``` #### Compare sardine fit to Stergiou and Christou For the sardine (bottom row in Table 4\), Stergio and Christou fit a 4th order polynomial. With `poly()`, a 4th order time\-varying regression model is fit to the sardine data as: ``` sardine87$t = sardine87$Year-1963 model <- lm(log.metric.tons ~ poly(t,4), data=sardine87) ``` This indicates support for the 2nd, 3rd, and 4th orders but not the 1st (linear) part. ``` summary(model) ``` ``` ## ## Call: ## lm(formula = log.metric.tons ~ poly(t, 4), data = sardine87) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.115300 -0.053090 -0.008895 0.041783 0.165885 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 9.31524 0.01717 542.470 < 2e-16 *** ## poly(t, 4)1 0.08314 0.08412 0.988 0.335453 ## poly(t, 4)2 -0.18809 0.08412 -2.236 0.037559 * ## poly(t, 4)3 -0.35504 0.08412 -4.220 0.000463 *** ## poly(t, 4)4 0.25674 0.08412 3.052 0.006562 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.08412 on 19 degrees of freedom ## Multiple R-squared: 0.6353, Adjusted R-squared: 0.5586 ## F-statistic: 8.275 on 4 and 19 DF, p-value: 0.0004846 ``` Stergiou and Christou appear to have used a raw polynomial model using \\(t\\), \\(t^2\\), \\(t^3\\) and \\(t^4\\) as the covariates instead of orthogonal polynomials. To fit the model that they did, we use ``` model <- lm(log.metric.tons ~ t + I(t^2) + I(t^3) + I(t^4), data=sardine87) ``` Using a model fit with the raw time covariates, the coefficients and adjusted R2 are similar to that shown in Table 4\. ``` c(coef(model), summary(model)$adj.r.squared) ``` ``` ## (Intercept) t I(t^2) I(t^3) I(t^4) ## 9.672783e+00 -2.443273e-01 3.738773e-02 -1.983588e-03 3.405533e-05 ## ## 5.585532e-01 ``` The test for autocorrelation of the residuals is ``` x <- resid(model) Box.test(x, lag = 14, type = "Ljung-Box", fitdf=5) ``` ``` ## ## Box-Ljung test ## ## data: x ## X-squared = 32.317, df = 9, p-value = 0.0001755 ``` `fitdf` specifies the number of parameters estimated by the model. In this case it is 5, intercept and 4 coefficients. The p\-value is less than 0\.05 indicating that the residuals are temporally correlated. ### 2\.1\.4 Summary #### Why use time\-varying regression? * It looks there is a simple time relationship. If a high\-order polynomial is required, that is a bad sign. * Easy and fast * Easy to explain * You are only forecasting a few years ahead * No assumptions required about ‘stationarity’ #### Why not to use time\-varying regression? * Autocorrelation is not modeled. That autocorrelation may hold information for forecasting. * You are only using temporal trend for forecasting (mean level). * If you use a high\-order polynomial, you might be modeling noise from a random walk. 
That means interpreting the temporal pattern as having information when in fact it has none. #### Is time\-varying regression used? It seems pretty simple. Is this used? All the time. Most “trend” analyses are a variant of time\-varying regression. If you fit a line to your data and report the trend or percent change, that’s a time\-varying regression.
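To make that last point concrete, here is a minimal sketch (not from the book) of turning the slope of a fitted line on the log scale into an annual percent change. It reuses the `anchovy87` data frame and its `t` column from above.

```
# First-order time-varying regression on the log scale
trend <- lm(log.metric.tons ~ t, data = anchovy87)
slope <- coef(trend)["t"]

# On the log scale, exp(slope) - 1 is the proportional change per year
100 * (exp(slope) - 1)  # approximate percent change in catch per year
```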
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/2-2-forecasting.html
2\.2 Forecasting ---------------- Forecasting is easy in R once you have a fitted model. Let’s say for the anchovy, we fit the model \\\[C\_t \= \\alpha \+ \\beta t \+ e\_t\\] where \\(t\\) starts at 1 (so 1964 is \\(t\=1\\) ). To predict the catch in year t, we use \\\[C\_t \= \\alpha \+ \\beta t \+ e\_t\\] Model fit: ``` require(FishForecast) anchovy87$t <- anchovy87$Year-1963 model <- lm(log.metric.tons ~ t, data=anchovy87) coef(model) ``` ``` ## (Intercept) t ## 8.36143085 0.05818942 ``` For anchovy, the estimated \\(\\alpha\\) (Intercept) is 8\.3614309 and \\(\\beta\\) is 0\.0581894\. We want to use these estimates to forecast 1988 ( \\(t\=25\\) ). So the 1988 forecast is 8\.3614309 \+ 0\.0581894 \\(\\times\\) 25 : ``` coef(model)[1]+coef(model)[2]*25 ``` ``` ## (Intercept) ## 9.816166 ``` log metric tons. ### 2\.2\.1 The forecast package The forecast package in R makes it easy to create forecasts with fitted models and to plot (some of) those forecasts. For a TV Regression model, our `forecast()` call looks like ``` fr <- forecast::forecast(model, newdata = data.frame(t=25:29)) ``` The dark grey bands are the 80% prediction intervals and the light grey are the 95% prediction intervals. ``` plot(fr) ``` Anchovy forecasts from a higher order polynomial can similarly be made. Let’s fit a 4\-th order polynomial. \\\[C\_t \= \\alpha \+ \\beta\_1 t \+ \\beta\_2 t^2 \+ \\beta\_3 t^3 \+ \\beta\_4 t^4 \+ e\_t\\] To forecast with this model, we fit the model to estimate the \\(\\beta\\)’s and then replace \\(t\\) with \\(24\\): \\\[C\_{1988} \= \\alpha \+ \\beta\_1 24 \+ \\beta\_2 24^2 \+ \\beta\_3 24^3 \+ \\beta\_4 24^4 \+ e\_t\\] This is how to do that in R: ``` model <- lm(log.metric.tons ~ t + I(t^2) + I(t^3) + I(t^4), data=anchovy87) fr <- forecast::forecast(model, newdata = data.frame(t=24:28)) fr ``` ``` ## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95 ## 1 10.05019 9.800941 10.29944 9.657275 10.44310 ## 2 10.18017 9.856576 10.50377 9.670058 10.69028 ## 3 10.30288 9.849849 10.75591 9.588723 11.01704 ## 4 10.41391 9.770926 11.05689 9.400315 11.42750 ## 5 10.50839 9.609866 11.40691 9.091963 11.92482 ``` Unfortunately, forecast’s `plot()` function for forecast objects does not recognize that there is only one predictor \\(t\\), thus we cannot use forecast’s plot function. If you do this in R, it throws an error. ``` try(plot(fr)) ``` ``` ## Error in plotlmforecast(x, PI = PI, shaded = shaded, shadecols = shadecols, : ## Forecast plot for regression models only available for a single predictor ``` ``` Error in plotlmforecast(x, PI = PI, shaded = shaded, shadecols = shadecols, : Forecast plot for regression models only available for a single predictor ``` I created a function that you can use to plot time\-varying regressions with polynomial \\(t\\). You will use this function in the lab. ``` plotforecasttv(model, ylims=c(8,17)) ``` A feature of a time\-varying regression with many polynomials is that it fits the data well, but the forecast quickly becomes uncertain due to uncertainty regarding the polynomial fit. A simpler model can give forecasts that do not become rapidly uncertain. The flip\-side is that the simpler model may not capture the short\-term trends very well and may suffer from autocorrelated residuals. ``` model <- lm(log.metric.tons ~ t + I(t^2), data=anchovy87) ``` ``` plotforecasttv(model, ylims=c(8,17)) ```
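For these `lm()` fits, essentially the same point forecasts and prediction intervals can be obtained with base R’s `predict()`. A minimal sketch (not from the book), applied to the last model fitted above:

```
# 95% prediction intervals from base R for five years past the data
newdat <- data.frame(t = 25:29)
predict(model, newdata = newdat, interval = "prediction", level = 0.95)
```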
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/3-arima-models.html
Chapter 3 ARIMA Models ====================== The basic idea in an ARMA model is that past values in the time series have information about the current state. An AR model, the first part of ARMA, models the current state as a linear function of past values: \\\[x\_t \= \\phi\_1 x\_{t\-1} \+ \\phi\_2 x\_{t\-2} \+ ... \+ \\phi\_p x\_{t\-p} \+ e\_t\\]
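To see what that recursion is doing, here is a minimal sketch (not from the book) that simulates an AR(2\) process directly from the equation; the next section uses `arima.sim()` to do the same job.

```
set.seed(42)
n   <- 100
phi <- c(0.5, 0.3)   # phi_1 and phi_2
e   <- rnorm(n)      # independent Normal errors
x   <- numeric(n)    # x[1] and x[2] start at 0; no burn-in for this sketch
for (t in 3:n) {
  x[t] <- phi[1] * x[t - 1] + phi[2] * x[t - 2] + e[t]
}
plot.ts(x)
```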
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/3-1-overview.html
3\.1 Overview ------------- ### 3\.1\.1 Components of an ARIMA model You will commonly see ARIMA models referred to as *Box\-Jenkins* models. This model has 3 components (p, d, q): * **AR autoregressive** \\(y\_t\\) depends on past values. The AR level is maximum lag \\(p\\). \\\[x\_t \= \\phi\_1 x\_{t\-1} \+ \\phi\_2 x\_{t\-2} \+ ... \+ \\phi\_p x\_{t\-p} \+ e\_t\\] * **I differencing** \\(x\_t\\) may be a difference of the observed time series. The number of differences is denoted \\(d\\). First difference is \\(d\=1\\): \\\[x\_t \= y\_t \- y\_{t\-1}\\] * **MA moving average** The error \\(e\_t\\) can be a sum of a time series of independent random errors. The maximum lag is denoted \\(q\\). \\\[e\_t \= \\eta\_t \+ \\theta\_1 \\eta\_{t\-1} \+ \\theta\_2 \\eta\_{t\-2} \+ ... \+ \\theta\_q \\eta\_{t\-q},\\quad \\eta\_t \\sim N(0, \\sigma)\\] #### Create some data from an AR(2\) Model \\\[x\_t \= 0\.5 x\_{t\-1} \+ 0\.3 x\_{t\-2} \+ e\_t\\] ``` dat = arima.sim(n=1000, model=list(ar=c(.5,.3))) plot(dat) abline(h=0, col="red") ``` Compare AR(2\) and random data. #### AR(2\) is auto\-correlated Plot the data at time \\(t\\) against the data at time \\(t\-1\\). ### 3\.1\.2 Box\-Jenkins method This refers to a step\-by\-step process of selecting a forecasting model. You need to go through the steps otherwise you could end up fitting a nonsensical model, or fitting a sensible model with an algorithm that will not work on your data. A. Model form selection 1. Evaluate stationarity and seasonality 2. Selection of the differencing level (d) 3. Selection of the AR level (p) 4. Selection of the MA level (q) B. Parameter estimation C. Model checking ### 3\.1\.3 ACF and PACF functions #### The ACF function The auto\-correlation function (ACF) is the correlation between the data at time \\(t\\) and \\(t\+1\\). This is one of the basic diagnostic plots for time series data. ``` acf(dat[1:50]) ``` The ACF simply shows the correlation between all the data points that are lag \\(p\\) apart. Here are the correlations for points lag 1 and lag 10 apart. `cor()` is the correlation function. ``` cor(dat[2:TT], dat[1:(TT-1)]) ``` ``` ## [1] 0.7022108 ``` ``` cor(dat[11:TT], dat[1:(TT-10)]) ``` ``` ## [1] 0.1095311 ``` The values match what we see in the ACF plot. #### ACF for independent data Temporally independent data shows no significant autocorrelation. #### PACF function In the ACF for the AR(2\), we see that \\(x\_t\\) and \\(x\_{t\-3}\\) are correlated even though the model for \\(x\_t\\) does not include \\(x\_{t\-3}\\). \\(x\_{t\-3}\\) is correlated with \\(x\_t\\) indirectly because \\(x\_{t\-3}\\) is directly correlated with \\(x\_{t\-2}\\) and \\(x\_{t\-1}\\) and these two are in turn directly correlated with \\(x\_t\\). The partial autocorrelation function removes this indirect correlation. Thus the only significant lags in the PACF should be the lags that appear in the process model. For example, if the model is #### Partial ACF for AR(2\) \\\[x\_t \= 0\.5 x\_{t\-1} \+ 0\.3 x\_{t\-2} \+ e\_t\\] then only the first two lags should be significant in the PACF. ``` pacf(dat) ``` #### Partial ACF for AR(1\) Similarly if the process model is \\\[x\_t \= 0\.5 x\_{t\-1} \+ e\_t\\] The PACF should only have significant values at lag 1\. ``` dat <- arima.sim(TT, model=list(ar=c(.5))) pacf(dat) ```
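Once candidate (p, d, q) values have been read off the ACF and PACF, the model itself is fit in step B. A minimal sketch (not from this section) using base R’s `arima()` on a freshly simulated AR(2\) series:

```
dat2 <- arima.sim(n = 1000, model = list(ar = c(.5, .3)))
fit <- arima(dat2, order = c(2, 0, 0))  # ARIMA(2,0,0), i.e. an AR(2)
fit
```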
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/3-2-stationarity.html
3\.2 Stationarity ----------------- The first two steps of the Box\-Jenkins Method have to do with evaluating for stationarity and correcting for lack of stationarity in your data: A. Model form selection 1. **Evaluate stationarity and seasonality** 2. **Selection of the differencing level (d)** 3. Selection of the AR level (p) 4. Selection of the MA level (q) B. Parameter estimation C. Model checking ### 3\.2\.1 Definition Stationarity means ‘not changing in time’ in the context of time\-series models. Typically we test the trend and variance, but more generally all statistical properties of a time\-series are time\-constant if the time series is ‘stationary’. Many ARMA models exhibit stationarity. White noise is one type: \\\[x\_t \= e\_t, e\_t \\sim N(0,\\sigma)\\] An AR\-1 process with \\(b\<1\\) \\\[x\_t \= b x\_{t\-1} \+ e\_t\\] is also stationary. #### Stationarity around a trend The processes shown above have mean 0 and a flat level. We can also have stationarity around a non\-zero level or stationarity around a linear trend. If \\(b\=0\\), we have white noise and if \\(b\<1\\) we have AR\-1\. 1. Non\-zero mean: \\(x\_t \= \\mu \+ b x\_{t\-1} \+ e\_t\\) 2. Linear trend: \\(x\_t \= \\mu \+ at \+ b x\_{t\-1} \+ e\_t\\) ### 3\.2\.2 Non\-stationarity One of the most common forms of non\-stationarity that is tested for is ‘unit root’, which means that the process is a random walk: \\\[x\_t \= x\_{t\-1} \+ e\_t\\] #### Non\-stationarity with a trend Similar to the way we added an intercept and linear trend to the stationary processes, we can do the same to the random walk. 1. Non\-zero mean or intercept: \\(x\_t \= \\mu \+ x\_{t\-1} \+ e\_t\\) 2. Linear trend: \\(x\_t \= \\mu \+ at \+ x\_{t\-1} \+ e\_t\\) The effects are fundamentally different, however. The addition of \\(\\mu\\) leads to an upward linear trend in the mean, while the addition of \\(at\\) leads to exponential growth. ### 3\.2\.3 Stationarity tests Why is evaluating stationarity important? * Many AR models have a flat level or trend and time\-constant variance. If your data do not have those properties, you are fitting a model that is fundamentally inconsistent with your data. * Many standard algorithms for fitting ARIMA models assume stationarity. Note, you can fit ARIMA models without making this assumption, but you need to use the appropriate algorithm. We will discuss three common approaches to evaluating stationarity: * Visual test * (Augmented) Dickey\-Fuller test * KPSS test #### Visual test The visual test is simply looking at a plot of the data versus time. Look for * Change in the level over time. Is the time series increasing or decreasing? Does it appear to cycle? * Change in the variance over time. Do deviations away from the mean change over time, increase or decrease? Here is a plot of the anchovy and sardine in Greek waters from 1965 to 1989\. The anchovies have an obvious non\-stationary trend during this period. The mean level is going up. The sardines have a roughly stationary trend. The variance (deviations away from the mean) appear to be roughly stationary, neither increasing nor decreasing in time. Although the logged anchovy time series is increasing, it appears to have a linear trend. #### Dickey\-Fuller test The Dickey\-Fuller test (and the Augmented Dickey\-Fuller test) looks for evidence that the time series has a unit root.
#### Dickey\-Fuller test

The Dickey\-Fuller test (and the Augmented Dickey\-Fuller test) looks for evidence that the time series has a unit root. The null hypothesis is that the time series has a unit root, that is, it has a random walk component. The alternative hypothesis is some variation of stationarity. The test has three main versions.

Visually, the null and alternative hypotheses for the three Dickey\-Fuller tests are the following. It is hard to see, but in the panels on the left the variance around the trend is increasing and on the right it is not.

Mathematically, here are the null and alternative hypotheses. In each, we are testing if \\(\\delta\=0\\).

1. Null is a random walk with no drift \\(x\_t \= x\_{t\-1}\+e\_t\\) Alternative is a mean\-reverting (stationary) process with zero mean. \\(x\_t \= \\delta x\_{t\-1}\+e\_t\\)
2. Null is a random walk with drift (linear STOCHASTIC trend) \\(x\_t \= \\mu \+ x\_{t\-1} \+ e\_t\\) Alternative is a mean\-reverting (stationary) process with non\-zero mean and no trend. \\(x\_t \= \\mu \+ \\delta x\_{t\-1} \+ e\_t\\)
3. Null is a random walk with exponential trend \\(x\_t \= \\mu \+ at \+ x\_{t\-1} \+ e\_t\\) Alternative is a mean\-reverting (stationary) process with non\-zero mean and linear DETERMINISTIC trend. \\(x\_t \= \\mu \+ at \+ \\delta x\_{t\-1} \+ e\_t\\)

#### Example: Dickey\-Fuller test using adf.test()

`adf.test()` in the tseries package will apply the Augmented Dickey\-Fuller test and report the p\-value. We want to reject the Dickey\-Fuller null hypothesis of non\-stationarity. We will set `k=0` to apply the Dickey\-Fuller test, which tests for AR(1\) stationarity. The Augmented Dickey\-Fuller test allows for more general lag\-p stationarity.

```
adf.test(x, alternative = c("stationary", "explosive"),
         k = trunc((length(x)-1)^(1/3)))
```

`x` is the time\-series data in vector or ts form. Here is how to apply this test to the anchovy data:

```
tseries::adf.test(anchovy87ts, k=0)
```

```
## 
##  Augmented Dickey-Fuller Test
## 
## data:  anchovy87ts
## Dickey-Fuller = -2.8685, Lag order = 0, p-value = 0.2415
## alternative hypothesis: stationary
```

```
# or
tseries::adf.test(anchovy87$log.metric.tons, k=0)
```

The null hypothesis is not rejected. That is not what we want.

#### Example: Dickey\-Fuller test using ur.df()

The `urca` R package can also be used to apply the Dickey\-Fuller tests. Use `lags=0` for the Dickey\-Fuller test, which tests for AR(1\) stationarity. We will set `type="trend"` to deal with the trend seen in the anchovy data. Note, `adf.test()` uses this type by default.

```
ur.df(y, type = c("none", "drift", "trend"), lags = 0)
```

```
test = urca::ur.df(anchovy87ts, type="trend", lags=0)
test
```

```
## 
## ############################################################### 
## # Augmented Dickey-Fuller Test Unit Root / Cointegration Test # 
## ############################################################### 
## 
## The value of the test statistic is: -2.8685 4.0886 4.7107
```

`ur.df()` will report the test statistic. You can look up the values of the test statistic for different \\(\\alpha\\) levels using `summary(test)` or `attr(test, "cval")`. If the test statistic is less than the critical value for \\(\\alpha\\)\=0\.05 ('5pct' in cval), the null hypothesis of non\-stationarity is rejected. For the Dickey\-Fuller test, you do want to reject the null hypothesis.

The test statistic is

```
attr(test, "teststat")
```

```
##                tau3     phi2    phi3
## statistic -2.86847 4.088559 4.71069
```

and the critical value at \\(\\alpha \= 0\.05\\) is

```
attr(test,"cval")
```

```
##       1pct  5pct 10pct
## tau3 -4.38 -3.60 -3.24
## phi2  8.21  5.68  4.67
## phi3 10.61  7.24  5.91
```

The statistic is larger than the critical value and thus the null hypothesis of non\-stationarity is not rejected. That's not what we want.
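If you find yourself doing this comparison repeatedly, a small helper along the following lines can automate it. This is a sketch, not part of urca or the FishForecast package; it assumes `test` is the `ur.df()` object fit above with `type="trend"`, whose unit\-root statistic is named `tau3`.

```
# Sketch of a helper that compares the ur.df() tau statistic to its critical
# value. TRUE means the null hypothesis of non-stationarity is rejected.
reject_unit_root <- function(test, stat = "tau3", level = "5pct") {
  stat_val <- attr(test, "teststat")["statistic", stat]
  crit_val <- attr(test, "cval")[stat, level]
  stat_val < crit_val
}
reject_unit_root(test)  # FALSE for the anchovy fit above (-2.87 > -3.60)
```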
#### Augmented Dickey\-Fuller test

The Dickey\-Fuller test assumes that the stationary process is AR(1\) (autoregressive lag\-1\). The Augmented Dickey\-Fuller test allows a more general stationary process. The idea of the test, however, is the same. We can apply the Augmented Dickey\-Fuller test with the `ur.df()` function or with the `adf.test()` function in the `tseries` package.

```
adf.test(x, alternative = c("stationary", "explosive"),
         k = trunc((length(x)-1)^(1/3)))
```

The alternative is either stationary, like \\(x\_t \= \\delta x\_{t\-1} \+ \\eta\_t\\) with \\(\\delta\<1\\), or 'explosive', with \\(\\delta\>1\\). `k` is the number of time lags allowed in the autoregression and is generally determined by the length of your time series.

#### Example: Augmented Dickey\-Fuller tests with adf.test()

With the `tseries` package, we apply the Augmented Dickey\-Fuller test with `adf.test()`. This function uses the test where the alternative model is stationary around a linear trend: \\(x\_t \= \\mu \+ at \+ \\delta x\_{t\-1} \+ e\_t\\).

```
tseries::adf.test(anchovy87ts)
```

```
## 
##  Augmented Dickey-Fuller Test
## 
## data:  anchovy87ts
## Dickey-Fuller = -0.57814, Lag order = 2, p-value = 0.9685
## alternative hypothesis: stationary
```

In both cases, we do not reject the null hypothesis that the data have a random walk. Thus there is no support for these time series being stationary.

#### Example: Augmented Dickey\-Fuller tests with ur.df()

With the `urca` package, we apply the Augmented Dickey\-Fuller test with `ur.df()`. The defaults for `ur.df()` are different from those of `adf.test()`. `ur.df()` allows you to specify which of the 3 alternative hypotheses you want: none (stationary around 0\), drift (stationary around a non\-zero intercept), or trend (stationary around a linear trend). Another difference is that by default `ur.df()` uses a fixed lag of 1, while by default `adf.test()` selects the lag based on the length of the time series. We will specify "trend" to make the test similar to `adf.test()`. We will also set the lags the way `adf.test()` does.

```
k = trunc((length(anchovy87ts)-1)^(1/3))
test = urca::ur.df(anchovy87ts, type="trend", lags=k)
test
```

```
## 
## ############################################################### 
## # Augmented Dickey-Fuller Test Unit Root / Cointegration Test # 
## ############################################################### 
## 
## The value of the test statistic is: -0.5781 3.2816 0.8113
```

The test statistic values are the same, but we need to look up the critical values with `summary(test)`.
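As in the earlier example, the critical values live in the object's attributes; a short sketch of the lookup (nothing here beyond base R and the attributes shown above):

```
# Sketch: pull the tau3 statistic and its 5% critical value from the
# augmented test fit above, or print everything with summary().
summary(test)                                # full regression output plus critical values
attr(test, "teststat")["statistic", "tau3"]  # -0.5781, as printed above
attr(test, "cval")["tau3", "5pct"]           # 5% critical value to compare against
```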
#### KPSS test

In the Dickey\-Fuller test, the null hypothesis is the unit root, i.e. random walk. Often there is not enough power to reject the null hypothesis, and a null hypothesis is retained unless there is strong evidence against it. The Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test instead has as its null hypothesis that the time series is stationary around a level (or around a linear trend). The alternative hypothesis for the KPSS test is a random walk. The stationarity assumption is general; it does not assume a specific type of stationarity such as white noise. If both the KPSS and Dickey\-Fuller tests support non\-stationarity, then the stationarity assumption is not supported.

#### Example: KPSS tests

```
tseries::kpss.test(anchovy87ts, null="Trend")
```

```
## 
##  KPSS Test for Trend Stationarity
## 
## data:  anchovy87ts
## KPSS Trend = 0.19182, Truncation lag parameter = 2, p-value = 0.01907
```

Here `null="Trend"` was included to account for the increasing trend in the data. The null hypothesis of stationarity is rejected. Thus both the KPSS and Dickey\-Fuller tests support the hypothesis that the anchovy time series is non\-stationary. That's not what we want.

### 3\.2\.4 Differencing the data

Differencing the data is used to correct non\-stationarity. Differencing means creating a new time series \\(z\_t \= x\_t \- x\_{t\-1}\\). First\-order differencing means you do this once (giving \\(z\_t\\)) and second\-order differencing means you do this twice (giving \\(z\_t \- z\_{t\-1}\\)).

The `diff()` function takes the first difference:

```
x <- diff(c(1,2,4,7,11))
x
```

```
## [1] 1 2 3 4
```

The second difference is the first difference of the first difference.

```
diff(x)
```

```
## [1] 1 1 1
```

Here is a plot of the anchovy data and its first difference.

```
par(mfrow=c(1,2))
plot(anchovy87ts, type="l")
title("Anchovy")
plot(diff(anchovy87ts), type="l")
title("Anchovy first difference")
```

Let's test the anchovy data with one difference using the KPSS test.

```
diff.anchovy = diff(anchovy87ts)
tseries::kpss.test(diff.anchovy)
```

```
## Warning in tseries::kpss.test(diff.anchovy): p-value greater than printed p-
## value
```

```
## 
##  KPSS Test for Level Stationarity
## 
## data:  diff.anchovy
## KPSS Level = 0.28972, Truncation lag parameter = 2, p-value = 0.1
```

The null hypothesis of stationarity is not rejected. That is good.

Let's test the first difference of the anchovy data using the Augmented Dickey\-Fuller test. We do the default test and allow it to choose the number of lags.

```
tseries::adf.test(diff.anchovy)
```

```
## 
##  Augmented Dickey-Fuller Test
## 
## data:  diff.anchovy
## Dickey-Fuller = -4.2126, Lag order = 2, p-value = 0.01584
## alternative hypothesis: stationary
```

The null hypothesis of non\-stationarity is rejected. That is what we want. However, differencing removed the trend, so we are testing against a more general model than we need. Let's test with an alternative hypothesis that has a non\-zero mean but no trend. We can do this with `ur.df()` and `type='drift'`.

```
test <- urca::ur.df(diff.anchovy, type="drift", lags=2)
```

The null hypothesis of non\-stationarity is rejected. That is good. The test statistic is

```
attr(test, "teststat")
```

```
##                tau2     phi1
## statistic -3.492685 6.099778
```

and the critical value at \\(\\alpha \= 0\.05\\) is

```
attr(test,"cval")
```

```
##       1pct  5pct 10pct
## tau2 -3.75 -3.00 -2.63
## phi1  7.88  5.18  4.12
```

### 3\.2\.5 Summary

Test stationarity before you fit an ARMA model.

* Visual test: is the time series fluctuating about a level or a linear trend?

Yes or maybe? Apply a "unit root" test.

* (Augmented) Dickey\-Fuller test
* KPSS test

No, or it fails the unit root test? Apply differencing and re\-test.

Still not passing?

* Try a second difference.

Still not passing?

* An ARMA model might not be the best choice. Or you may need an ad hoc detrend.
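To wrap up this section, here is a minimal sketch of the summary workflow: run both unit\-root tests on a series and on its first difference in one call. The function name and layout are illustrative only (not from the FishForecast package); it uses `adf.test()` and `kpss.test()` from tseries as above.

```
# Sketch: apply ADF and KPSS to a series and its first difference.
# Interpretation: a small ADF p-value supports stationarity; a small KPSS
# p-value supports NON-stationarity.
library(tseries)

check_stationarity <- function(x) {
  data.frame(
    series = c("original", "first difference"),
    adf_p  = c(adf.test(x)$p.value, adf.test(diff(x))$p.value),
    kpss_p = c(kpss.test(x, null = "Trend")$p.value,
               kpss.test(diff(x), null = "Level")$p.value)
  )
}
check_stationarity(anchovy87ts)
```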
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/3-3-model-structure.html
3\.3 Model structure
--------------------

We are now at steps A3 and A4 of the Box\-Jenkins Method. Note we did not address seasonality since we are working with yearly data.

A. Model form selection
1. Evaluate stationarity and seasonality
2. Selection of the differencing level (d)
3. **Selection of the AR level (p)**
4. **Selection of the MA level (q)**

B. Parameter estimation

C. Model checking

*Much of this will be automated when we use the forecast package*

### 3\.3\.1 AR and MA lags

Step A3 is to determine the number of \\(p\\) lags in the AR part of the model:

\\\[x\_t \= \\phi\_1 x\_{t\-1} \+ \\phi\_2 x\_{t\-2} \+ ... \+ \\phi\_p x\_{t\-p} \+ e\_t\\]

Step A4 is to determine the number of \\(q\\) lags in the MA part of the model:

\\\[e\_t \= \\eta\_t \+ \\theta\_1 \\eta\_{t\-1} \+ \\theta\_2 \\eta\_{t\-2} \+ ... \+ \\theta\_q \\eta\_{t\-q},\\quad \\eta\_t \\sim N(0, \\sigma)\\]

### 3\.3\.2 Model order

For an ARIMA model, the number of AR lags, the number of differences, and the number of MA lags is called the **model order** or just **order**. Examples (note \\(e\_t \\sim N(0,\\sigma)\\)):

* order (0,0,0\) white noise \\\[x\_t \= e\_t\\]
* order (1,0,0\) AR\-1 process \\\[x\_t \= \\phi x\_{t\-1} \+ e\_t\\]
* order (0,0,1\) MA\-1 process \\\[x\_t \= e\_t \+ \\theta e\_{t\-1}\\]
* order (1,0,1\) AR\-1 MA\-1 process \\\[x\_t \= \\phi x\_{t\-1} \+ e\_t \+ \\theta e\_{t\-1}\\]
* order (0,1,0\) random walk \\\[x\_t \- x\_{t\-1} \= e\_t\\] which is the same as \\\[x\_t \= x\_{t\-1} \+ e\_t\\]

### 3\.3\.3 Choosing the AR and MA levels

#### Method \#1: use the ACF and PACF functions

The ACF plot shows you how the correlation between \\(x\_t\\) and \\(x\_{t\+p}\\) decreases as \\(p\\) increases. The PACF plot shows you the same but removes the autocorrelation due to lags less than \\(p\\). If your ACF and PACF look like the top panel, the process is AR\-p; the first lag where the PACF drops below the dashed lines is the \\(p\\) lag for your model. If they look like the middle panel, the process is MA\-q; the first lag where the ACF drops below the dashed lines is the \\(q\\) lag for your model. If they look like the bottom panel, the process is ARMA and this approach doesn't work.

#### Method \#2: Use formal model selection

This weighs how well the model fits against how many parameters the model has. We will use this approach. The `auto.arima()` function in the forecast package in R allows you to easily estimate the \\(p\\) and \\(q\\) for your ARMA model. We will use the first difference of the anchovy data since our stationarity diagnostics indicated that a first difference makes our time series stationary.

```
anchovy.diff1 = diff(anchovy87$log.metric.tons)
forecast::auto.arima(anchovy.diff1)
```

```
## Series: anchovy.diff1 
## ARIMA(0,0,1) with non-zero mean 
## 
## Coefficients:
##           ma1    mean
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

The output indicates that the 'best' model is an MA\-1 with a non\-zero mean. "Non\-zero mean" means that the mean of our data (`anchovy.diff1`) is not zero.

`auto.arima()` will also estimate the amount of differencing needed.

```
forecast::auto.arima(anchovy87ts)
```

```
## Series: anchovy87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

The output indicates that the 'best' model is an MA\-1 with a first difference. "With drift" means that the first difference of our data has a non\-zero mean, i.e. the series drifts up or down. This is the same model as before; only the jargon regarding the mean is different.
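To see that the two descriptions really are the same model, you can fit each parameterization explicitly with `forecast::Arima()`. This is a sketch for illustration; the coefficient estimates should essentially reproduce the values printed above.

```
# Sketch: the same model written two ways.
# (a) MA-1 with a mean, fit to the first-differenced series.
# (b) ARIMA(0,1,1) with drift, fit to the undifferenced series.
library(forecast)
fit_diff  <- Arima(diff(anchovy87ts), order = c(0, 0, 1), include.mean = TRUE)
fit_level <- Arima(anchovy87ts, order = c(0, 1, 1), include.drift = TRUE)
coef(fit_diff)   # "ma1" and "intercept" (the mean of the differences)
coef(fit_level)  # "ma1" and "drift" -- essentially the same values
```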
#### More examples

Let's try fitting to some simulated data. We will simulate with `arima.sim()`. We will specify no differencing.

```
set.seed(100)
a1 = arima.sim(n=100, model=list(ar=c(.8,.1)))
forecast::auto.arima(a1, seasonal=FALSE, max.d=0)
```

```
## Series: a1 
## ARIMA(1,0,0) with non-zero mean 
## 
## Coefficients:
##          ar1     mean
##       0.6928  -0.5343
## s.e.  0.0732   0.2774
## 
## sigma^2 estimated as 0.7703:  log likelihood=-128.16
## AIC=262.33   AICc=262.58   BIC=270.14
```

The 'best\-fit' model is simpler than the model used to simulate the data.

#### How often is the 'true' model chosen?

Let's fit 100 simulated time series and see how often the 'true' model is chosen. The correct type of model, AR\-p, is selected nearly every time, but the simpler AR\-1 is usually chosen over the correct AR\-2\.

```
save.fits = rep(NA,100)
for(i in 1:100){
  a1 = arima.sim(n=100, model=list(ar=c(.8,.1)))
  fit = forecast::auto.arima(a1, seasonal=FALSE, max.d=0, max.q=0)
  save.fits[i] = paste0(fit$arma[1], "-", fit$arma[2])
}
table(save.fits)
```

```
## save.fits
## 1-0 2-0 3-0 4-0 
##  74  20   5   1
```

### 3\.3\.4 Trace \= TRUE

You can see which models `auto.arima()` tried by using `trace=TRUE`. The models are selected on AICc by default and the AICc value is shown next to each model.

```
forecast::auto.arima(anchovy87ts, trace=TRUE)
```

```
## 
##  ARIMA(2,1,2) with drift         : 0.9971438
##  ARIMA(0,1,0) with drift         : -1.582738
##  ARIMA(1,1,0) with drift         : -3.215851
##  ARIMA(0,1,1) with drift         : -5.727702
##  ARIMA(0,1,0)                    : -1.869767
##  ARIMA(1,1,1) with drift         : -2.907571
##  ARIMA(0,1,2) with drift         : -3.219136
##  ARIMA(1,1,2) with drift         : -1.363802
##  ARIMA(0,1,1)                    : -1.425496
## 
##  Best model: ARIMA(0,1,1) with drift
```

```
## Series: anchovy87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

### 3\.3\.5 stepwise \= FALSE

By default, step\-wise selection is used and an approximation is used for the models tried in the model\-selection step. For a final model selection, you should turn these off.

```
forecast::auto.arima(anchovy87ts, stepwise=FALSE, approximation=FALSE)
```

```
## Series: anchovy87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

### 3\.3\.6 Summary

* Once you have dealt with stationarity, you need to determine the order of the model: the AR part and the MA part.
* Although you could simply use `auto.arima()`, it is best to run `acf()` and `pacf()` on your data to understand it better.
	+ Does it look like a pure AR process?
* Also evaluate if there are reasons to assume a particular structure.
	+ Are you using an established model form, from say another paper?
	+ Are you fitting to a process that is fundamentally AR only or AR \+ MA?
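Following the advice in the summary above, here is a minimal sketch of the ACF/PACF check for the differenced anchovy series (base R only; `anchovy87ts` as before). An MA\-1 signature would show a single significant ACF spike at lag 1 with a more gradual decay in the PACF.

```
# Sketch: ACF and PACF of the first-differenced log anchovy landings.
d_anchovy <- diff(anchovy87ts)
par(mfrow = c(1, 2))
acf(d_anchovy,  main = "ACF of diff(anchovy)")
pacf(d_anchovy, main = "PACF of diff(anchovy)")
```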
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/3-4-fitting-arima-models.html
3\.4 Fitting ARIMA models
-------------------------

We are now at step B of the Box\-Jenkins Method.

A. Model form selection
1. Evaluate stationarity and seasonality
2. Selection of the differencing level (d)
3. Selection of the AR level (p)
4. Selection of the MA level (q)

B. **Parameter estimation**

C. **Model checking**

### 3\.4\.1 Fitting with `auto.arima()`

`auto.arima()` (in the forecast package) has many arguments.

```
auto.arima(y, d = NA, D = NA, max.p = 5, max.q = 5, max.P = 2, max.Q = 2,
  max.order = 5, max.d = 2, max.D = 1, start.p = 2, start.q = 2,
  start.P = 1, start.Q = 1, stationary = FALSE, seasonal = TRUE,
  ic = c("aicc", "aic", "bic"), stepwise = TRUE, trace = FALSE,
  approximation = (length(x) > 150 | frequency(x) > 12), truncate = NULL,
  xreg = NULL, test = c("kpss", "adf", "pp"),
  seasonal.test = c("seas", "ocsb", "hegy", "ch"), allowdrift = TRUE,
  allowmean = TRUE, lambda = NULL, biasadj = FALSE, parallel = FALSE,
  num.cores = 2, x = y, ...)
```

When just getting started, we will focus on just a few of these:

* `trace` To print out the models that were tested.
* `stepwise` and `approximation` Set these to FALSE for slower but more thorough estimation when selecting the model order.
* `test` The test to use to select the amount of differencing.

#### Load the data

Load the data by loading the **FishForecast** package.

```
require(FishForecast)
```

`anchovy87ts` is a ts object of the log metric tons for 1964\-1987\. We will use this for `auto.arima()`, however we could also use `anchovy87$log.metric.tons`. `anchovy87ts` is just

```
anchovy87ts <- ts(anchovy87, start=1964)
```

#### Fit to the anchovy data using `auto.arima()`

```
fit <- forecast::auto.arima(anchovy87ts)
```

Here are the values for anchovy in Table 8 of Stergiou and Christou.

| Model | \\(\\theta\_1\\) | drift (c) | R\\(^2\\) | BIC | LB |
| --- | --- | --- | --- | --- | --- |
| (0,1,1\) | 0\.563 | 0\.064 | 0\.83 | 1775 | 5\.4 |

Here are the equivalent values from the best fit from `auto.arima()`:

| Model | theta1 | drift | R2 | BIC | LB |
| --- | --- | --- | --- | --- | --- |
| (0,1,1\) | 0\.5731337 | 0\.0640889 | 0\.8402976 | \-3\.584377 | 5\.372543 |

Where do we find each of the components of Stergiou and Christou's Table 8?

#### The parameter estimates

We can extract the parameter estimates from a fitted object in R using `coef()`.

```
coef(fit)
```

```
##        ma1      drift 
## -0.5731337  0.0640889
```

The `ma1` is the same as \\(\\theta\_1\\) except that its sign is flipped because of the way Stergiou and Christou write their MA models. They write it as

\\\[e\_t \= \\eta\_t \- \\theta\_1 \\eta\_{t\-1}\\]

instead of the form that `auto.arima()` uses

\\\[e\_t \= \\eta\_t \+ \\theta\_1 \\eta\_{t\-1}\\]

#### Computing R2

This is not output as part of an arima fitted object, so we need to compute it.

```
res <- resid(fit)
dat <- anchovy87$log.metric.tons
meany <- mean(dat, na.rm=TRUE)
r2 <- 1- sum(res^2,na.rm=TRUE)/sum((dat-meany)^2,na.rm=TRUE)
```

#### Ljung\-Box statistic

```
LB <- Box.test(res, type="Ljung-Box", lag=12, fitdf=2)$statistic
```

`fitdf=2` accounts for the two estimated parameters.

#### BIC

BIC is in `fit$bic`. Why is BIC different? Because there is a missing constant, which is fairly common. The absolute value of BIC is unimportant; only its value relative to other models that you tested is important.
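Putting those pieces together, a sketch like the following assembles the same row as the comparison tables above from the fitted object. It reuses the `res`, `r2`, and `LB` objects computed in this section; `fit$bic` holds the BIC on the scale reported by the forecast package (so it differs from Stergiou and Christou's value by a constant, as noted).

```
# Sketch: one row in the style of Stergiou and Christou's Table 8,
# built from the auto.arima() fit above.
data.frame(
  theta1 = -unname(coef(fit)["ma1"]),   # sign flipped to match their MA convention
  drift  = unname(coef(fit)["drift"]),
  R2     = r2,
  BIC    = fit$bic,
  LB     = unname(LB)
)
```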
### 3\.4\.2 Outputting the models tested

Pass in `trace=TRUE` to see a list of the models tested in `auto.arima()`'s search. By default `auto.arima()` uses AICc for model selection, and the AICc values are shown. Smaller is better for AICc, and AICc values that differ by less than 2 have similar data support. Look for any models with an AICc similar to that of the best selected model; you should consider those models also.

```
forecast::auto.arima(anchovy87ts, trace=TRUE)
```

```
## 
##  ARIMA(2,1,2) with drift         : 0.9971438
##  ARIMA(0,1,0) with drift         : -1.582738
##  ARIMA(1,1,0) with drift         : -3.215851
##  ARIMA(0,1,1) with drift         : -5.727702
##  ARIMA(0,1,0)                    : -1.869767
##  ARIMA(1,1,1) with drift         : -2.907571
##  ARIMA(0,1,2) with drift         : -3.219136
##  ARIMA(1,1,2) with drift         : -1.363802
##  ARIMA(0,1,1)                    : -1.425496
## 
##  Best model: ARIMA(0,1,1) with drift
```

```
## Series: anchovy87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

### 3\.4\.3 Repeat with the sardine data

Stergiou and Christou's sardine model (Table 8\) is ARIMA(0,1,0\):

\\\[x\_t \= x\_{t\-1}\+e\_t\\]

The model selected by `auto.arima()` is ARIMA(0,0,1\):

\\\[x\_t \= e\_t \+ \\theta\_1 e\_{t\-1}\\]

```
forecast::auto.arima(sardine87ts)
```

```
## Series: sardine87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

Why? Stergiou and Christou used the Augmented Dickey\-Fuller test to determine the amount of differencing needed, while the default for `auto.arima()` is to use the KPSS test.

#### Repeat using `test='adf'`

Now the selected model is the same.

```
fit <- auto.arima(sardine87ts, test="adf")
fit
```

```
## Series: sardine87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

Compare the estimated values in Stergiou and Christou Table 8:

| Model | \\(\\theta\_1\\) | drift (c) | R2 | BIC | LB |
| --- | --- | --- | --- | --- | --- |
| (0,1,0\) | NA | NA | 0\.00 | 1396 | 22\.2 |

versus from `auto.arima()`

```
## Warning in mean.default(sardine, na.rm = TRUE): argument is not numeric or
## logical: returning NA
```

```
## Warning in Ops.factor(left, right): '-' not meaningful for factors
```

| Model | theta1 | drift | R2 | BIC | LB |
| --- | --- | --- | --- | --- | --- |
| (0,1,0\) | 0\.5731337 | 0\.0640889 | \-Inf | \-3\.584377 | 5\.372543 |

### 3\.4\.4 Missing values

These functions work fine with missing values. Missing values are denoted NA.

```
anchovy.miss <- anchovy87ts
anchovy.miss[10:14] <- NA
fit <- auto.arima(anchovy.miss)
fit
```

```
## Series: anchovy.miss 
## ARIMA(1,1,0) with drift 
## 
## Coefficients:
##           ar1  drift
##       -0.5622  0.067
## s.e.   0.2109  0.022
## 
## sigma^2 estimated as 0.02947:  log likelihood=6.35
## AIC=-6.71   AICc=-5.45   BIC=-3.3
```

### 3\.4\.5 Fit a specific ARIMA model

Sometimes you don't want to search, but rather fit an ARIMA model with a specific order. Say you wanted to fit this model:

\\\[x\_t \= \\beta\_1 x\_{t\-1} \+ \\beta\_2 x\_{t\-2} \+ e\_t\\]

For that you can use `Arima()` in the forecast package:

```
fit.AR2 <- forecast::Arima(anchovy87ts, order=c(2,0,0))
fit.AR2
```

```
## Series: anchovy87ts 
## ARIMA(2,0,0) with non-zero mean 
## 
## Coefficients:
##          ar1     ar2    mean
##       0.6912  0.2637  9.2353
## s.e.  0.2063  0.2142  0.5342
## 
## sigma^2 estimated as 0.0511:  log likelihood=2.1
## AIC=3.81   AICc=5.91   BIC=8.52
```
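If you fit several candidate models by hand with `Arima()`, their information criteria can be compared as long as the models are fit to the same series with the same amount of differencing. The sketch below compares the AR\-2 above against a simpler AR\-1; the AR\-1 fit is mine, not from the chapter.

```
# Sketch: compare a hand-specified AR(2) to a simpler AR(1), both fit to the
# same undifferenced series, using AICc (smaller is better).
fit.AR1 <- forecast::Arima(anchovy87ts, order = c(1, 0, 0))
fit.AR2 <- forecast::Arima(anchovy87ts, order = c(2, 0, 0))
c(AR1 = fit.AR1$aicc, AR2 = fit.AR2$aicc)
```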
### 3\.4\.6 Model checking

* Plot your data.
* Is the plot long\-tailed (Chl, some types of fish data)? Take the logarithm.
* Fit the model.
* Plot your residuals.
* Check your residuals for stationarity, normality, and independence.

Ideally your response variable will be unimodal. If it is not, you are using an ARIMA model that doesn't produce data like yours. While you could change the assumptions about the error distribution in the model, it will be easier to transform your data. Look at histograms of your data.

Use `checkresiduals()` to do basic diagnostics.

```
fit <- forecast::auto.arima(anchovy87ts)
checkresiduals(fit)
```

```
## 
##  Ljung-Box test
## 
## data:  Residuals from ARIMA(0,1,1) with drift
## Q* = 1.4883, df = 3, p-value = 0.685
## 
## Model df: 2.   Total lags used: 5
```

### 3\.4\.7 Workflow for non\-seasonal data

* Go through the Box\-Jenkins Method to evaluate stationarity.
* Plot the data and make decisions about transformations to make the data more unimodal.
* Make some decisions about differencing and any other data transformations via the stationarity tests.
* Use `auto.arima(data, trace=TRUE)` to evaluate which ARMA models best fit the data. Fix the differencing if needed.
* Determine a set of candidate models. Include a null model in the candidate list. Naive and naive with drift are typical nulls (see the sketch at the end of this section).
* Test candidate models for forecast performance with cross\-validation (next lecture).

### 3\.4\.8 Stepwise vs exhaustive model selection

Stepwise model selection is fast and useful if you need to explore many models and it takes a while to fit each model. Our models fit quickly and we don't have season in our models. Though it will not make a difference for this particular dataset, in general set `stepwise=FALSE` to do a more thorough model search.

```
forecast::auto.arima(anchovy87ts, stepwise=FALSE, approximation=FALSE)
```

```
## Series: anchovy87ts 
## ARIMA(0,1,1) with drift 
## 
## Coefficients:
##           ma1   drift
##       -0.5731  0.0641
## s.e.   0.1610  0.0173
## 
## sigma^2 estimated as 0.03583:  log likelihood=6.5
## AIC=-6.99   AICc=-5.73   BIC=-3.58
```

### 3\.4\.9 Summary

* `auto.arima()` in the forecast package is a good choice for selection and fitting of ARIMA models.
* `Arima()` is a good choice when you know the order (structure) of the model.
* You (may) need to know whether the mean of the data should be zero and whether it is stationary around a linear trend.
	+ `include.mean=TRUE` means the mean is not zero
	+ `include.drift=TRUE` means fit a model that fluctuates around a trend (up or down)
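As referenced in the workflow above, here is a minimal sketch of the two typical null models; `naive()` and `rwf()` are in the forecast package and `h` is the forecast horizon (5 years here, chosen for illustration).

```
# Sketch: the two null models mentioned in the workflow above.
library(forecast)
fc_naive <- naive(anchovy87ts, h = 5)              # random walk forecast
fc_drift <- rwf(anchovy87ts, h = 5, drift = TRUE)  # random walk with drift
fc_naive$mean   # flat forecast at the last observed value
fc_drift$mean   # forecast that continues the average historical change
```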
* `test` The test to use to select the amount of differencing. #### Load the data Load the data by loading the **FishForecast** package. ``` require(FishForecast) ``` `anchovy87ts` is a ts object of the log metric tons for 1964\-1987\. We will use this for `auto.arima()` however we could also use `anchovy87$log.metric.tons`. `anchovy87ts` is just ``` anchovy87ts <- ts(anchovy87, start=1964) ``` #### Fit to the anchovy data using `auto.arima()` ``` fit <- forecast::auto.arima(anchovy87ts) ``` Here are the values for anchovy in Table 8 of Stergiou and Christou. | Model | \\(\\theta\_1\\) | drift (c) | R\\(^2\\) | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,1\) | 0\.563 | 0\.064 | 0\.83 | 1775 | 5\.4 | Here is the equivalent values from the best fit from `auto.arima()`: | Model | theta1 | drift | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,1\) | 0\.5731337 | 0\.0640889 | 0\.8402976 | \-3\.584377 | 5\.372543 | Where do we find each of the components of Stergiou and Christou’s Table 8? #### The parameter estimates We can extract the parameter estimates from a fitted object in R using `coef()`. ``` coef(fit) ``` ``` ## ma1 drift ## -0.5731337 0.0640889 ``` The `ma1` is the same as \\(\\theta\_1\\) except its negative because of the way Stergiou and Christou write their MA models. They write it as \\\[e\_t \= \\eta\_t \- \\theta\_1 \\eta\_{t\-1}\\] instead of the form that `auto.arima()` uses \\\[e\_t \= \\eta\_t \+ \\theta\_1 \\eta\_{t\-1}\\] #### Computing R2 This is not output as part of a arima fitted object so we need to compute it. ``` res <- resid(fit) dat <- anchovy87$log.metric.tons meany <- mean(dat, na.rm=TRUE) r2 <- 1- sum(res^2,na.rm=TRUE)/sum((dat-meany)^2,na.rm=TRUE) ``` #### Ljung\-Box statistic ``` LB <- Box.test(res, type="Ljung-Box", lag=12, fitdf=2)$statistic ``` fitdf\=2 is from the number of parameters estimated. #### BIC BIC is in `fit$BIC`. Why is BIC different? Because there is a missing constant, which is fairly common. The absolute value of BIC is unimportant. Only its value relative to other models that you tested is important. #### Load the data Load the data by loading the **FishForecast** package. ``` require(FishForecast) ``` `anchovy87ts` is a ts object of the log metric tons for 1964\-1987\. We will use this for `auto.arima()` however we could also use `anchovy87$log.metric.tons`. `anchovy87ts` is just ``` anchovy87ts <- ts(anchovy87, start=1964) ``` #### Fit to the anchovy data using `auto.arima()` ``` fit <- forecast::auto.arima(anchovy87ts) ``` Here are the values for anchovy in Table 8 of Stergiou and Christou. | Model | \\(\\theta\_1\\) | drift (c) | R\\(^2\\) | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,1\) | 0\.563 | 0\.064 | 0\.83 | 1775 | 5\.4 | Here is the equivalent values from the best fit from `auto.arima()`: | Model | theta1 | drift | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,1\) | 0\.5731337 | 0\.0640889 | 0\.8402976 | \-3\.584377 | 5\.372543 | Where do we find each of the components of Stergiou and Christou’s Table 8? #### The parameter estimates We can extract the parameter estimates from a fitted object in R using `coef()`. ``` coef(fit) ``` ``` ## ma1 drift ## -0.5731337 0.0640889 ``` The `ma1` is the same as \\(\\theta\_1\\) except its negative because of the way Stergiou and Christou write their MA models. 
They write it as \\\[e\_t \= \\eta\_t \- \\theta\_1 \\eta\_{t\-1}\\] instead of the form that `auto.arima()` uses \\\[e\_t \= \\eta\_t \+ \\theta\_1 \\eta\_{t\-1}\\] #### Computing R2 This is not output as part of a arima fitted object so we need to compute it. ``` res <- resid(fit) dat <- anchovy87$log.metric.tons meany <- mean(dat, na.rm=TRUE) r2 <- 1- sum(res^2,na.rm=TRUE)/sum((dat-meany)^2,na.rm=TRUE) ``` #### Ljung\-Box statistic ``` LB <- Box.test(res, type="Ljung-Box", lag=12, fitdf=2)$statistic ``` fitdf\=2 is from the number of parameters estimated. #### BIC BIC is in `fit$BIC`. Why is BIC different? Because there is a missing constant, which is fairly common. The absolute value of BIC is unimportant. Only its value relative to other models that you tested is important. ### 3\.4\.2 Outputting the models tested Pass in `trace=TRUE` to see a list of the models tested in `auto.arima()`’s search. By default `auto.arima()` uses AICc for model selection and the AICc values are shown. Smaller is better for AICc and AICc values that are different by less than 2 have similar data support. Look for any models with similar AICc to the best selected model. You should consider that model also. ``` forecast::auto.arima(anchovy87ts, trace=TRUE) ``` ``` ## ## ARIMA(2,1,2) with drift : 0.9971438 ## ARIMA(0,1,0) with drift : -1.582738 ## ARIMA(1,1,0) with drift : -3.215851 ## ARIMA(0,1,1) with drift : -5.727702 ## ARIMA(0,1,0) : -1.869767 ## ARIMA(1,1,1) with drift : -2.907571 ## ARIMA(0,1,2) with drift : -3.219136 ## ARIMA(1,1,2) with drift : -1.363802 ## ARIMA(0,1,1) : -1.425496 ## ## Best model: ARIMA(0,1,1) with drift ``` ``` ## Series: anchovy87ts ## ARIMA(0,1,1) with drift ## ## Coefficients: ## ma1 drift ## -0.5731 0.0641 ## s.e. 0.1610 0.0173 ## ## sigma^2 estimated as 0.03583: log likelihood=6.5 ## AIC=-6.99 AICc=-5.73 BIC=-3.58 ``` ### 3\.4\.3 Repeat with the sardine data Stergiou and Christou sardine model (Table 8\) is ARIMA(0,1,0\): \\\[x\_t \= x\_{t\-1}\+e\_t\\] The model selected by `auto.arima()` is ARIMA(0,0,1\): \\\[x\_t \= e\_t \+ \\theta\_1 e\_{t\-1}\\] ``` forecast::auto.arima(sardine87ts) ``` ``` ## Series: sardine87ts ## ARIMA(0,1,1) with drift ## ## Coefficients: ## ma1 drift ## -0.5731 0.0641 ## s.e. 0.1610 0.0173 ## ## sigma^2 estimated as 0.03583: log likelihood=6.5 ## AIC=-6.99 AICc=-5.73 BIC=-3.58 ``` Why? Stergiou and Christou used the Augmented Dickey\-Fuller test to determine the amount of differencing needed while the default for `auto.arima()` is to use the KPSS test. #### Repeat using `test='adf'` Now the selected model is the same. ``` fit <- auto.arima(sardine87ts, test="adf") fit ``` ``` ## Series: sardine87ts ## ARIMA(0,1,1) with drift ## ## Coefficients: ## ma1 drift ## -0.5731 0.0641 ## s.e. 0.1610 0.0173 ## ## sigma^2 estimated as 0.03583: log likelihood=6.5 ## AIC=-6.99 AICc=-5.73 BIC=-3.58 ``` Compare the estimated values in Stergiou and Christou Table 8: | Model | \\(\\theta\_1\\) | drift (c) | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,0\) | NA | NA | 0\.00 | 1396 | 22\.2 | versus from `auto.arima()` ``` ## Warning in mean.default(sardine, na.rm = TRUE): argument is not numeric or ## logical: returning NA ``` ``` ## Warning in Ops.factor(left, right): '-' not meaningful for factors ``` | Model | theta1 | drift | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,0\) | 0\.5731337 | 0\.0640889 | \-Inf | \-3\.584377 | 5\.372543 | #### Repeat using `test='adf'` Now the selected model is the same. 
``` fit <- auto.arima(sardine87ts, test="adf") fit ``` ``` ## Series: sardine87ts ## ARIMA(0,1,1) with drift ## ## Coefficients: ## ma1 drift ## -0.5731 0.0641 ## s.e. 0.1610 0.0173 ## ## sigma^2 estimated as 0.03583: log likelihood=6.5 ## AIC=-6.99 AICc=-5.73 BIC=-3.58 ``` Compare the estimated values in Stergiou and Christou Table 8: | Model | \\(\\theta\_1\\) | drift (c) | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,0\) | NA | NA | 0\.00 | 1396 | 22\.2 | versus from `auto.arima()` ``` ## Warning in mean.default(sardine, na.rm = TRUE): argument is not numeric or ## logical: returning NA ``` ``` ## Warning in Ops.factor(left, right): '-' not meaningful for factors ``` | Model | theta1 | drift | R2 | BIC | LB | | --- | --- | --- | --- | --- | --- | | (0,1,0\) | 0\.5731337 | 0\.0640889 | \-Inf | \-3\.584377 | 5\.372543 | ### 3\.4\.4 Missing values These functions work fine with missing values. Missing values are denoted NA. ``` anchovy.miss <- anchovy87ts anchovy.miss[10:14] <- NA fit <- auto.arima(anchovy.miss) fit ``` ``` ## Series: anchovy.miss ## ARIMA(1,1,0) with drift ## ## Coefficients: ## ar1 drift ## -0.5622 0.067 ## s.e. 0.2109 0.022 ## ## sigma^2 estimated as 0.02947: log likelihood=6.35 ## AIC=-6.71 AICc=-5.45 BIC=-3.3 ``` ### 3\.4\.5 Fit a specific ARIMA model Sometimes you don’t want to search, but rather fit an ARIMA model with a specific order. Say you wanted to fit this model: \\\[x\_t \= \\beta\_1 x\_{t\-1} \+ \\beta\_2 x\_{t\-2} \+ e\_t\\] For that you can use `Arima()` in the forecast package: ``` fit.AR2 <- forecast::Arima(anchovy87ts, order=c(2,0,0)) fit.AR2 ``` ``` ## Series: anchovy87ts ## ARIMA(2,0,0) with non-zero mean ## ## Coefficients: ## ar1 ar2 mean ## 0.6912 0.2637 9.2353 ## s.e. 0.2063 0.2142 0.5342 ## ## sigma^2 estimated as 0.0511: log likelihood=2.1 ## AIC=3.81 AICc=5.91 BIC=8.52 ``` ### 3\.4\.6 Model checking * Plot your data * Is the plot long\-tailed (Chl, some types of fish data)? Take the logarithm. * Fit model. * Plot your residuals * Check your residuals for stationarity, normality, and independence Ideally your response variable will be unimodal. If not, you are using an ARIMA model that doesn’t produce data like yours. While you could change the assumptions about the error distribution in the model, it will be easier to transform your data. Look at histograms of your data: Use `checkresiduals()` to do basic diagnostics. ``` fit <- forecast::auto.arima(anchovy87ts) checkresiduals(fit) ``` ``` ## ## Ljung-Box test ## ## data: Residuals from ARIMA(0,1,1) with drift ## Q* = 1.4883, df = 3, p-value = 0.685 ## ## Model df: 2. Total lags used: 5 ``` ### 3\.4\.7 Workflow for non\-seasonal data * Go through Box\-Jenkins Method to evaluate stationarity * Plot the data and make decisions about transformations to make the data more unimodal * Make some decisions about differencing and any other data transformations via the stationarity tests * Use `auto.arima(data, trace=TRUE)` to evaluate what ARMA models best fit the data. Fix the differencing if needed. * Determine a set of candidate models. Include a null model in the candidate list. naive and naive with drift are typical nulls. * Test candidate models for forecast performance with cross\-validation (next lecture). ### 3\.4\.8 Stepwise vs exhaustive model selection Stepwise model selection is fast and useful if you need to explore many models and it takes awhile to fit each model. Our models fit quickly and we don’t have season in our models. 
Though it will not make a difference for this particular dataset, in general set `stepwise=FALSE` to do a more thorough model search. ``` forecast::auto.arima(anchovy87ts, stepwise=FALSE, approximation=FALSE) ``` ``` ## Series: anchovy87ts ## ARIMA(0,1,1) with drift ## ## Coefficients: ## ma1 drift ## -0.5731 0.0641 ## s.e. 0.1610 0.0173 ## ## sigma^2 estimated as 0.03583: log likelihood=6.5 ## AIC=-6.99 AICc=-5.73 BIC=-3.58 ``` ### 3\.4\.9 Summary * `auto.arima()` in the forecast package is a good choice for selection and fitting of ARIMA models. * `Arima()` is a good choice when you know the order (structure) of the model. * You (may) need to know whether the mean of the data should be zero and whether it is stationary around a linear line. + `include.mean=TRUE` means the mean is not zero + `include.drift=TRUE` means fit a model that fluctuates around a trend (up or down)
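To make the last two bullets concrete, here is a minimal sketch of passing those arguments to `Arima()`; the model orders are arbitrary and only for illustration.

```
# ARIMA(1,0,0) fit with a non-zero mean estimated
fit.m <- forecast::Arima(anchovy87ts, order = c(1, 0, 0), include.mean = TRUE)
# ARIMA(1,1,0) fit with a drift term, i.e. fluctuating around a linear trend
fit.d <- forecast::Arima(anchovy87ts, order = c(1, 1, 0), include.drift = TRUE)
coef(fit.m)
coef(fit.d)
```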
3\.5 Forecasting ---------------- The basic idea of forecasting with an ARIMA model is the same as forecasting with a time\-varying regressiion model. We estimate a model and the parameters for that model. For example, let’s say we want to forecast with ARIMA(2,1,0\) model: \\\[y\_t \= \\beta\_1 y\_{t\-1} \+ \\beta\_2 y\_{t\-2} \+ e\_t\\] where \\(y\_t\\) is the first difference of our anchovy data. Let’s estimate the \\(\\beta\\)’s for this model from the 1964\-1987 anchovy data. ``` fit <- Arima(anchovy87ts, order=c(2,1,0)) coef(fit) ``` ``` ## ar1 ar2 ## -0.3347994 -0.1453928 ``` So we will forecast with this model: \\\[y\_t \= \-0\.3348 y\_{t\-1} \- 0\.1454 y\_{t\-2} \+ e\_t\\] So to get our forecast for 1988, we do this \\\[(y\_{1988}\-y\_{1987}) \= \-0\.3348 (y\_{1987}\-y\_{1986}) \- 0\.1454 (y\_{1986}\-y\_{1985})\\] Thus \\\[y\_{1988} \= y\_{1987}\-0\.3348 (y\_{1987}\-y\_{1986}) \- 0\.1454 (y\_{1986}\-y\_{1985})\\] Here is R code to do that: ``` anchovy87ts[24]+coef(fit)[1]*(anchovy87ts[24]-anchovy87ts[23])+ coef(fit)[2]*(anchovy87ts[23]-anchovy87ts[22]) ``` ``` ## ar1 ## 10.00938 ``` ### 3\.5\.1 Forecasting with `forecast()` `forecast(fit, h=h)` automates the forecast calculations for us. `forecast()` takes a fitted object, `fit`, from `arima()` and output forecasts for `h` time steps forward. The upper and lower prediction intervals are also computed. ``` fit <- forecast::auto.arima(sardine87ts, test="adf") fr <- forecast::forecast(fit, h=5) fr ``` ``` ## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95 ## 1988 10.03216 9.789577 10.27475 9.661160 10.40317 ## 1989 10.09625 9.832489 10.36001 9.692861 10.49964 ## 1990 10.16034 9.876979 10.44370 9.726977 10.59371 ## 1991 10.22443 9.922740 10.52612 9.763035 10.68582 ## 1992 10.28852 9.969552 10.60749 9.800701 10.77634 ``` We can plot our forecast with prediction intervals. Here is the sardine forecast: ``` plot(fr, xlab="Year") ``` #### Forecast for anchovy ``` fit <- forecast::auto.arima(anchovy87ts) fr <- forecast::forecast(fit, h=5) plot(fr) ``` #### Forecasts for chub mackerel We can repeat for other species. ``` spp="Chub.mackerel" dat <- subset(greeklandings, Species==spp & Year<=1987)$log.metric.tons dat <- ts(dat, start=1964) fit <- forecast::auto.arima(dat) fr <- forecast::forecast(fit, h=5) plot(fr, ylim=c(6,10)) ``` ### 3\.5\.2 Missing values Missing values are allowed for `arima()` and we can product forecasts with the same code. ``` anchovy.miss <- anchovy87ts anchovy.miss[10:14] <- NA fit <- forecast::auto.arima(anchovy.miss) fr <- forecast::forecast(fit, h=5) plot(fr) ``` ### 3\.5\.3 Null forecast models Whenever we are testing a forecast model or procedure we have developed, we should test against ‘null’ forecast models. These are standard ‘competing’ forecast models. * The Naive forecast * The Naive forecast with drift * The mean or average forecast #### The “Naive” forecast The “naive” forecast is simply the last value observed. If we want to prediction landings in 2019, the naive forecast would be the landings in 2018\. This is a difficult forecast to beat! It has the advantage of having no parameters. In forecast, we can fit this model with the `naive()` function. Note this is the same as the `rwf()` function. ``` fit.naive <- forecast::naive(anchovy87ts) fr.naive <- forecast::forecast(fit.naive, h=5) plot(fr.naive) ``` #### The “Naive” forecast with drift The “naive” forecast is equivalent to a random walk with no drift. 
So this \\\[x\_t \= x\_{t\-1} \+ e\_t\\\] As you saw with the anchovy fit, it doesn’t allow an upward trend. Let’s make it a little more flexible by adding `drift`. This means we estimate one more term, the trend. \\\[x\_t \= \\mu \+ x\_{t\-1} \+ e\_t\\\] ``` fit.rwf <- forecast::rwf(anchovy87ts, drift=TRUE) fr.rwf <- forecast::forecast(fit.rwf, h=5) plot(fr.rwf) ``` #### The “mean” forecast The “mean” forecast is simply the mean of the data. If we want to predict landings in 2019, the mean forecast would be the average of all our data. This is typically a poor forecast. It uses no information about the most recent values. In forecast, we can fit this model with the `Arima()` function and `order=c(0,0,0)`. This will fit this model: \\\[x\_t \= e\_t\\\] where \\(e\_t \\sim N(\\mu, \\sigma)\\). ``` fit.mean <- forecast::Arima(anchovy87ts, order=c(0,0,0)) fr.mean <- forecast::forecast(fit.mean, h=5) plot(fr.mean) ```
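To see how different these three null forecasts are for the anchovy data, it can help to line up their point forecasts. This is a small sketch using the `fr.naive`, `fr.rwf` and `fr.mean` forecast objects created above.

```
# Point forecasts from the three null models, side by side
cbind(naive = fr.naive$mean, drift = fr.rwf$mean, mean = fr.mean$mean)
```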
Chapter 4 Exponential smoothing models ====================================== The basic idea with an exponential smoothing model is that your forecast of \\(x\\) at time \\(t\\) is a smoothed function of past \\(x\\) values. \\\[\\hat{x}\_{t} \= \\alpha x\_{t\-1} \+ \\alpha (1\-\\alpha) x\_{t\-2} \+ \\alpha (1\-\\alpha)^2 x\_{t\-3} \+ \\dots\\] Although this looks similar to an AR model with a constraint on the \\(\\beta\\) terms, it is fundamentally different. There is no process model and one is **not** assuming that \\\[x\_{t} \= \\alpha x\_{t\-1} \+ \\alpha (1\-\\alpha) x\_{t\-2} \+ \\alpha (1\-\\alpha)^2 x\_{t\-3} \+ \\dots \+ e\_t\\] The goal is to find the \\(\\alpha\\) that minimizes \\(x\_t \- \\hat{x}\_t\\), i.e. the forecast error. The issues regarding stationarity do not arise because we are not fitting a stationary process model. We are not fitting a process model at all.
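To see what this weighting looks like in practice, here is a minimal sketch that computes the first ten weights for a made-up value of \\(\\alpha\\); the weight on \\(x\_{t\-j}\\) is \\(\\alpha(1\-\\alpha)^{j\-1}\\).

```
# Hypothetical alpha, chosen only for illustration
alpha <- 0.7
wts <- alpha * (1 - alpha)^(0:9)   # weights on x_{t-1}, x_{t-2}, ..., x_{t-10}
round(wts, 3)
sum(wts)  # the weights approach 1 as more lags are included
```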
4\.1 Overview ------------- ### 4\.1\.1 Naive model Let’s start with a simple example, an exponential smoothing model with \\(\\alpha\=1\\). This is called the Naive model: \\\[\\hat{x}\_{t} \= x\_{t\-1}\\] For the naive model, our forecast is simply the value in the previous time step. For example, a naive forecast of the anchovy landings in 1988 is the anchovy landings in 1987\. \\\[\\hat{x}\_{1988} \= x\_{1987}\\] This is the same as saying that we put 100% of the ‘weight’ on the most recent value and no weight on any value prior to that. \\\[\\hat{x}\_{1988} \= 1 \\times x\_{1987} \+ 0 \\times x\_{1986} \+ 0 \\times x\_{1985} \+ \\dots\\] Past values in the time series have information about the current state, but the naive model uses only the most recent past value. We can fit this with `forecast::Arima()`. ``` fit.rwf <- forecast::Arima(anchovy87ts, order=c(0,1,0)) fr.rwf <- forecast::forecast(fit.rwf, h=5) ``` Alternatively we can fit with `rwf()` or `naive()` which are shortcuts for the above lines. All fit the same model. ``` fr.rwf <- forecast::rwf(anchovy87ts, h=5) fr.rwf <- forecast::naive(anchovy87ts, h=5) ``` A plot of the forecast shows the forecast and the prediction intervals. ``` plot(fr.rwf) ``` ### 4\.1\.2 Exponential smoothing The naive model is a bit extreme. Often the values prior to the last value also have some information about future states. But the ‘information content’ should decrease the farther in the past that we go. A *smoother* is another word for a filter, which in time series parlance means a weighted sum of sequential values in a time series: \\\[w\_1 x\_t \+ w\_2 x\_{t\-1} \+ w\_3 x\_{t\-2} \+ \\dots\\] An exponential smoother is a filter (weighted sum) where the weights decline exponentially. Figure: Weighting function for exponential smoothing filter. The shape is determined by \\(\\alpha\\). ### 4\.1\.3 Exponential smoothing model A simple exponential smoothing model is like the naive model that just uses the last value to make the forecast, but instead of only using the last value it will use values farther in the past also. The weighting function falls off exponentially as shown above. Our goal when fitting an exponential smoothing model is to find the \\(\\alpha\\), which determines the shape of the weighting function, that minimizes the forecast errors. Figure: The size of \\(\\alpha\\) determines how past values affect the forecast.
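The `ets()` function introduced in the next section estimates \\(\\alpha\\) for you. Purely to illustrate what ‘finding the \\(\\alpha\\) that minimizes the forecast errors’ means, here is a sketch that computes the sum of squared one\-step errors over a grid of \\(\\alpha\\) values using `ses()` from the forecast package; the grid and the `initial="simple"` choice are arbitrary.

```
alphas <- seq(0.05, 0.95, by = 0.05)
sse <- sapply(alphas, function(a) {
  fit <- forecast::ses(anchovy87ts, alpha = a, initial = "simple")
  sum(residuals(fit)^2)   # sum of squared one-step-ahead errors
})
alphas[which.min(sse)]    # the alpha with the smallest squared error
```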
4\.2 `ets()` function --------------------- The `ets()` function in the **forecast** package fits exponential smoothing models and produces forecasts from the fitted models. It also includes functions for plotting forecasts. Load the data by loading the **FishForecast** package. ``` require(FishForecast) ``` Fit the model. ``` fit <- forecast::ets(anchovy87ts, model="ANN") ``` `model="ANN"` specifies the simple exponential smoothing model. Create a forecast for 5 time steps into the future. ``` fr <- forecast::forecast(fit, h=5) ``` Plot the forecast. ``` plot(fr) ``` Look at the estimates. ``` fit ``` ``` ## ETS(A,N,N) ## ## Call: ## forecast::ets(y = anchovy87ts, model = "ANN") ## ## Smoothing parameters: ## alpha = 0.7065 ## ## Initial states: ## l = 8.5553 ## ## sigma: 0.2166 ## ## AIC AICc BIC ## 6.764613 7.964613 10.298775 ``` ### 4\.2\.1 The weighting function The first coefficient of the ets fit is the \\(\\alpha\\) parameter for the weighting function. ``` alpha <- coef(fit)[1] wts <- alpha*(1-alpha)^(0:23) plot(1987:1964, wts/sum(wts), lwd=2, ylab="weight", xlab="", type="l") ``` Figure: Weighting function for the simple exponential smoothing model for anchovy. ### 4\.2\.2 Decomposing your model fit Sometimes you would like to see the smoothed level that the model estimated. You can see that with `plot(fit)` or `autoplot(fit)`. ``` autoplot(fit) ``` Figure 4\.1: Decomposition of an ets fit.
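If you want the numbers rather than the plot, the smoothed level is also stored in the fitted object. This is a small sketch assuming `fit` is the ETS(A,N,N) fit from above.

```
# The estimated level at each time step (the 'l' column of the states matrix)
head(fit$states)
# The corresponding one-step-ahead forecasts
head(fitted(fit))
```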
4\.3 ETS with trend ------------------- The simple exponential model has a level that evolves over time, but there is no trend, a tendency to go up or down. If a time series has a trend then we might want to include this in our forecast. #### Naive model with drift The naive model with drift is a simple example of a model with level and trend. This model uses the last observation as the forecast but includes a trend estimated from ALL the data. \\\[\\hat{x}\_{T\+1} \= x\_T \+ \\bar{b}\\] where \\(\\bar{b}\\) is the mean trend or change from one time step to the next (\\(x\_t\-x\_{t\-1}\\)). \\\[\\bar{b} \= \\frac{1}{T\-1}\\sum\_{t\=2}^T (x\_t \- x\_{t\-1})\\] We can fit this with `forecast::Arima()`. ``` fit.rwf <- forecast::Arima(anchovy87ts, order=c(0,1,0), include.drift=TRUE) fr.rwf <- forecast::forecast(fit.rwf, h=5) ``` Alternatively we can fit with `rwf()` which is a shortcut for the above lines. ``` fr.rwf <- forecast::rwf(anchovy87ts, h=5, drift=TRUE) ``` A plot of the forecast shows the forecast and the prediction intervals. ``` plot(fr.rwf) ``` The trend seen in the blue line is estimated from the overall trend in ALL the data. ``` coef(fit.rwf) ``` ``` ## drift ## 0.06577281 ``` The trend from all the data is (last\-first)/(number of steps). ``` mean(diff(anchovy87ts)) ``` ``` ## [1] 0.06577281 ``` The naive model with drift only uses the latest data to choose the level for our forecast but uses all the data to choose the trend. It would make more sense to weight the more recent trends more heavily. ### 4\.3\.1 Exponential smoothing model with trend The exponential smoothing model has a level term which is an exponential weighting of past \\(x\\) and a trend term which is an exponential weighting of past trends \\(x\_t \- x\_{t\-1}\\). \\\[\\hat{x}\_{T\+1} \= l\_T \+ b\_T\\] where \\(b\_T\\) is a weighted average with the more recent trends given more weight. \\\[b\_T \= \\sum\_{t\=2}^T \\beta (1\-\\beta)^{T\-t}(x\_t \- x\_{t\-1})\\] The value of \\(\\beta\\) determines how much past trends affect the trend we use in our forecast. #### Fit with `ets()` To fit an exponential smoothing model with trend, we use `model="AAN"`. ``` fit <- forecast::ets(anchovy87ts, model="AAN") fr <- forecast::forecast(fit, h=5) plot(fr) ``` Passing in “AAN” specifies that the model must have a trend. We can also let `ets()` choose whether or not to include a trend by passing in “AZN”. Here is a summary of the simple ETS models and the model code for each. | model | “ZZZ” | alternate function | | --- | --- | --- | | exponential smoothing no trend | “ANN” | `ses()` | | exponential smoothing with trend | “AAN” | `holt()` | | exponential smoothing choose trend | “AZN” | NA | The alternate function does exactly the same fitting. It is just a ‘shortcut’. ### 4\.3\.2 Produce forecast using a previous fit Sometimes you want to estimate a forecasting model from one dataset and use that model to forecast another dataset or another area. Here is how to do that. This is the fit to the 1964\-1987 data: ``` fit1 <- forecast::ets(anchovy87ts, model="ANN") ``` Use that model with the 2000\-2007 data and produce a forecast: ``` dat <- subset(greeklandings, Species=="Anchovy" & Year>=2000 & Year<=2007) dat <- ts(dat$log.metric.tons, start=2000) fit2 <- forecast::ets(dat, model=fit1) ``` ``` ## Model is being refit with current smoothing parameters but initial states are being re-estimated. ## Set 'use.initial.values=TRUE' if you want to re-use existing initial values.
``` ``` fr2 <- forecast::forecast(fit2, h=5) ``` ``` plot(fr2) ```
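The warning above tells you that only the smoothing parameters from `fit1` are being reused. If you also want to reuse its initial states, the warning points at the relevant argument; a minimal sketch:

```
# Reuse both the smoothing parameters and the initial states from fit1
# (whether that makes sense depends on whether the two series start at comparable levels)
fit2b <- forecast::ets(dat, model = fit1, use.initial.values = TRUE)
fr2b <- forecast::forecast(fit2b, h = 5)
plot(fr2b)
```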
4\.4 Forecast performance ------------------------- We can evaluate the forecast performance with forecasts of our test data or we can use all the data and use time\-series cross\-validation. Let’s start with the former. ### 4\.4\.1 Test forecast performance #### Test against a test data set We will fit an an exponential smoothing model with trend to the training data and make a forecast for the years that we ‘held out’. ``` fit1 <- forecast::ets(traindat, model="AAN") h=length(testdat) fr <- forecast::forecast(fit1, h=h) plot(fr) points(testdat, pch=2, col="red") legend("topleft", c("forecast","actual"), pch=c(20,2), col=c("blue","red")) ``` We can calculate a variety of forecast error metrics with ``` forecast::accuracy(fr, testdat) ``` ``` ## ME RMSE MAE MPE MAPE MASE ## Training set 0.0155561 0.1788989 0.1442712 0.1272938 1.600532 0.7720807 ## Test set -0.5001701 0.5384355 0.5001701 -5.1678506 5.167851 2.6767060 ## ACF1 Theil's U ## Training set -0.008371542 NA ## Test set -0.500000000 2.690911 ``` We would now repeat this for all the models in our candidate set and choose the model with the best forecast performance. #### Test using time\-series cross\-validation Another approach is to use all the data and test a series of forecasts made by fitting the model to different lengths of the data. In this approach, we don’t have test data. Instead we will use all the data for fitting and for forecast testing. We will redefine `traindat` as all our Anchovy data. #### tsCV() function We will use the `tsCV()` function. We need to define a function that returns a forecast. ``` far2 <- function(x, h, model){ fit <- ets(x, model=model) forecast(fit, h=h) } ``` Now we can use `tsCV()` to run our `far2()` function to a series of training data sets. We will specify that a NEW ets model be estimated for each training set. We are not using the weighting estimated for the whole data set but estimating the weighting new for each set. The `e` are our forecast errors for all the forecasts that we did with the data. ``` e <- forecast::tsCV(traindat, far2, h=1, model="AAN") e ``` ``` ## Time Series: ## Start = 1964 ## End = 1989 ## Frequency = 1 ## [1] -0.245378390 0.366852341 0.419678595 -0.414861770 -0.152727933 ## [6] -0.183775208 -0.013799590 0.308433377 -0.017680471 -0.329690537 ## [11] -0.353441463 0.266143346 -0.110848616 -0.005227309 0.157821831 ## [16] 0.196184446 0.008135667 0.326024067 0.085160559 0.312668447 ## [21] 0.246437781 0.117274740 0.292601670 -0.300814605 -0.406118961 ## [26] NA ``` Let’s look at the first few `e` so we see exactly with `tsCV()` is doing. ``` e[2] ``` ``` ## [1] 0.3668523 ``` This uses training data from \\(t\=1\\) to \\(t\=2\\) so fits an ets to the first two data points alone. Then it creates a forecast for \\(t\=3\\) and compares that forecast to the actual value observed for \\(t\=3\\). ``` TT <- 2 # end of the temp training data temp <- traindat[1:TT] fit.temp <- forecast::ets(temp, model="AAN") fr.temp <- forecast::forecast(fit.temp, h=1) traindat[TT+1] - fr.temp$mean ``` ``` ## Time Series: ## Start = 3 ## End = 3 ## Frequency = 1 ## [1] 0.3668523 ``` ``` e[3] ``` ``` ## [1] 0.4196786 ``` This uses training data from \\(t\=1\\) to \\(t\=2\\) so fits an ets to the first two data points alone. Then it creates a forecast for \\(t\=3\\) and compares that forecast to the actual value observed for \\(t\=3\\). 
``` TT <- 3 # end of the temp training data temp <- traindat[1:TT] fit.temp <- forecast::ets(temp, model="AAN") fr.temp <- forecast::forecast(fit.temp, h=1) traindat[TT+1] - fr.temp$mean ``` ``` ## Time Series: ## Start = 4 ## End = 4 ## Frequency = 1 ## [1] 0.4196786 ``` #### Forecast accuracy metrics Once we have the errors from `tsCV()`, we can compute forecast accuracy metrics. RMSE: root mean squared error ``` rmse <- sqrt(mean(e^2, na.rm=TRUE)) ``` MAE: mean absolute error ``` mae <- mean(abs(e), na.rm=TRUE) ``` ### 4\.4\.2 Testing a specific ets model By specifying `model="AAN"`, we estimated a new ets model (meaning new weighting) for each training set used. We might want to specify that we use only the weighting we estimated for the full data set. We do this by passing in a fit to `model`. The `e` are our forecast errors for all the forecasts that we did with the data. `fit1` below is the ets estimated from all the data 1964 to 1989\. Note, the code will produce a warning that it is estimating the initial value and just using the weighting. That is what we want. ``` fit1 <- forecast::ets(traindat, model="AAN") e <- forecast::tsCV(traindat, far2, h=1, model=fit1) e ``` ``` ## Time Series: ## Start = 1964 ## End = 1989 ## Frequency = 1 ## [1] NA 0.576663901 1.031385937 0.897828249 1.033164616 ## [6] 0.935274283 0.958914499 1.265427119 -0.017241938 -0.332751184 ## [11] -0.330473144 0.255886314 -0.103926617 0.031206730 0.154727479 ## [16] 0.198328366 -0.020605522 0.297475742 0.005297401 0.264939892 ## [21] 0.196256334 0.129798648 0.335887872 -0.074017535 -0.373267163 ## [26] NA ```
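As in the previous subsection, we can summarize these cross\-validation errors with a few accuracy metrics; a short sketch:

```
c(ME   = mean(e, na.rm = TRUE),
  RMSE = sqrt(mean(e^2, na.rm = TRUE)),
  MAE  = mean(abs(e), na.rm = TRUE))
```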
4\.5 Further Reading -------------------- This chapter covers a small sample of the simpler ETS models that can be fit. There are many other types of more complex ETS models and the `ets()` function will fit these also. Rob J Hyndman (lead on the forecast package) and George Athanasopoulos have an excellent online text on practical forecasting and exponential smoothing. Read [their chapter](https://otexts.org/fpp2/expsmooth.html) on exponential smoothing to learn more about these models and how to use them.
Chapter 5 Testing forecast accuracy =================================== Once you have found a set of possible forecast models, you are ready to compare forecasts from a variety of models and choose a forecast model. To quantify the forecast performance, we need to create forecasts for data that we have so that we can compare the forecast to actual data. There are two approaches to this: holding out data for testing and cross\-validation.
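The examples below use `traindat` (the anchovy series for 1964\-1987\) and `testdat` (1988 and 1989\). Here is a sketch of one way to construct that split from the `greeklandings` data used elsewhere in this book; the intermediate object names are only for illustration.

```
require(FishForecast)
anchovy <- subset(greeklandings, Species == "Anchovy" & Year <= 1989)
anchovyts <- ts(anchovy$log.metric.tons, start = 1964)
traindat <- window(anchovyts, end = 1987)    # 1964-1987 training data
testdat  <- window(anchovyts, start = 1988)  # 1988-1989 test data
```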
5\.1 Training set/test set -------------------------- One approach is to ‘hold out’ some of your data as the test data and not use it at all in your fitting. To measure the forecast performance, you fit to your training data and test the forecast against the data in the test set. This is the approach that Stergiou and Christou used. Stergiou and Christou used 1964\-1987 as their training data and tested their forecasts against 1988 and 1989\. ### 5\.1\.1 Forecast versus actual We will fit to the training data and make a forecast for the test data. We can then compare the forecast to the actual values in the test data. ``` fit1 <- forecast::auto.arima(traindat) fr <- forecast::forecast(fit1, h=2) fr ``` ``` ## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95 ## 1988 10.03216 9.789577 10.27475 9.661160 10.40317 ## 1989 10.09625 9.832489 10.36001 9.692861 10.49964 ``` Plot the forecast and compare to the actual values in 1988 and 1989\. ``` plot(fr) points(testdat, pch=2, col="red") legend("topleft", c("forecast","actual"), pch=c(20,2), col=c("blue","red")) ```
5\.2 Cross\-Validation ---------------------- An alternate approach is to use cross\-validation. This approach uses windows or shorter segments of the whole time series to make a series of single forecasts. We can use either a variable length or a fixed length window. ### 5\.2\.1 Variable window For the variable length window approach applied to the Anchovy time series, we would fit the model 1964\-1973 and forecast 1974, then 1964\-1974 and forecast 1975, then 1964\-1975 and forecast 1976, and continue up to 1964\-1988 and forecast 1989\. This would create 16 forecasts which we would compare to the actual landings. The window is ‘variable’ because the length of the time series used for fitting the model keeps increasing by 1\. ### 5\.2\.2 Fixed window Another approach uses a fixed window. For example, a 10\-year window. ### 5\.2\.3 Cross\-validation farther into the future Sometimes it makes more sense to test the performance for forecasts that are farther in the future. For example, if the data from your catch surveys takes some time to process, then you might need to make forecasts that are farther than 1 year from your last data point. In that case, there is a gap between your training data and your test data point.
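All three of these schemes can be run with `forecast::tsCV()`, which is used in the next section. As a preview, here is a sketch with a generic forecast function; the choice of `auto.arima()` and the 10\-year window are only for illustration.

```
fc <- function(x, h) forecast::forecast(forecast::auto.arima(x), h = h)
e.variable <- forecast::tsCV(traindat, fc, h = 1)               # variable (expanding) window
e.fixed    <- forecast::tsCV(traindat, fc, h = 1, window = 10)  # fixed 10-year window
e.farther  <- forecast::tsCV(traindat, fc, h = 4)               # errors for 1- to 4-step-ahead forecasts (a matrix)
```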
5\.3 Metrics ------------ How to we quantify the difference between the forecast and the actual values in the test data set? Let’s take the example of a training set/test set. The forecast errors are the difference between the test data and the forecasts. ``` fr.err <- testdat - fr$mean fr.err ``` ``` ## Time Series: ## Start = 1988 ## End = 1989 ## Frequency = 1 ## [1] -0.1704302 -0.4944778 ``` ### 5\.3\.1 `accuracy()` function The `accuracy()` function in forecast provides many different metrics such as mean error, root mean square error, mean absolute error, mean percentage error, mean absolute percentage error. It requires a forecast object and a test data set that is the same length. ``` accuracy(fr, testdat) ``` ``` ## ME RMSE MAE MPE MAPE MASE ## Training set -0.00473511 0.1770653 0.1438523 -0.1102259 1.588409 0.7698386 ## Test set -0.33245398 0.3698342 0.3324540 -3.4390277 3.439028 1.7791577 ## ACF1 Theil's U ## Training set -0.04312022 NA ## Test set -0.50000000 1.90214 ``` The metrics are: #### ME Mean err ``` me <- mean(fr.err) me ``` ``` ## [1] -0.332454 ``` #### RMSE Root mean squared error ``` rmse <- sqrt(mean(fr.err^2)) rmse ``` ``` ## [1] 0.3698342 ``` #### MAE Mean absolute error ``` mae <- mean(abs(fr.err)) mae ``` ``` ## [1] 0.332454 ``` #### MPE Mean percentage error ``` fr.pe <- 100*fr.err/testdat mpe <- mean(fr.pe) mpe ``` ``` ## [1] -3.439028 ``` #### MAPE Mean absolute percentage error ``` mape <- mean(abs(fr.pe)) mape ``` ``` ## [1] 3.439028 ``` ``` accuracy(fr, testdat)[,1:5] ``` ``` ## ME RMSE MAE MPE MAPE ## Training set -0.00473511 0.1770653 0.1438523 -0.1102259 1.588409 ## Test set -0.33245398 0.3698342 0.3324540 -3.4390277 3.439028 ``` ``` c(me, rmse, mae, mpe, mape) ``` ``` ## [1] -0.3324540 0.3698342 0.3324540 -3.4390277 3.4390277 ``` ### 5\.3\.2 Test multiple models Now that you have some metrics for forecast accuracy, you can compute these for all the models in your candidate set. ``` # The model picked by auto.arima fit1 <- forecast::Arima(traindat, order=c(0,1,1)) fr1 <- forecast::forecast(fit1, h=2) test1 <- forecast::accuracy(fr1, testdat)[2,1:5] # AR-1 fit2 <- forecast::Arima(traindat, order=c(1,1,0)) fr2 <- forecast::forecast(fit2, h=2) test2 <- forecast::accuracy(fr2, testdat)[2,1:5] # Naive model with drift fit3 <- forecast::rwf(traindat, drift=TRUE) fr3 <- forecast::forecast(fit3, h=2) test3 <- forecast::accuracy(fr3, testdat)[2,1:5] ``` #### Show a summary | | ME | RMSE | MAE | MPE | MAPE | | --- | --- | --- | --- | --- | --- | | (0,1,1\) | \-0\.293 | 0\.320 | 0\.293 | \-3\.024 | 3\.024 | | (1,1,0\) | \-0\.309 | 0\.341 | 0\.309 | \-3\.200 | 3\.200 | | Naive | \-0\.483 | 0\.510 | 0\.483 | \-4\.985 | 4\.985 | ### 5\.3\.3 Cross\-validation Computing forecast errors and performance metrics with time series cross\-validation is similar to the training set/test test approach. The first step to using the `tsCV()` function is to define the function that returns a forecast for your model. Your function needs to take `x`, a time series, and `h` the length of the forecast. You can also have other arguments if needed. Here is an example function for a forecast from an ARIMA model. ``` fun <- function(x, h, order){ forecast::forecast(Arima(x, order=order), h=h) } ``` We pass this into the `tsCV()` function. `tsCV()` requires our dataset and our forecast function. The arguments after the forecast function are those we included in our `fun` definition. `tsCV()` returns a time series of errors. 
``` e <- forecast::tsCV(traindat, fun, h=1, order=c(0,1,1)) ``` We can then compute performance metrics from these errors. ``` tscv1 <- c(ME=mean(e, na.rm=TRUE), RMSE=sqrt(mean(e^2, na.rm=TRUE)), MAE=mean(abs(e), na.rm=TRUE)) tscv1 ``` ``` ## ME RMSE MAE ## 0.1128788 0.2261706 0.1880392 ``` #### Cross\-validation farther in future Compare accuracy of forecasts 1 year out versus 4 years out. If `h` is greater than 1, then the errors are returned as a matrix with each `h` in a column. Column 4 is the forecast 4 years out. ``` e <- forecast::tsCV(traindat, fun, h=4, order=c(0,1,1))[,4] #RMSE tscv4 <- c(ME=mean(e, na.rm=TRUE), RMSE=sqrt(mean(e^2, na.rm=TRUE)), MAE=mean(abs(e), na.rm=TRUE)) rbind(tscv1, tscv4) ``` ``` ## ME RMSE MAE ## tscv1 0.1128788 0.2261706 0.1880392 ## tscv4 0.2839064 0.3812815 0.3359689 ``` As we would expect, forecast errors are higher when we make forecasts farther into the future. #### Cross\-validation with a fixed window Compare accuracy of forecasts with a fixed 10\-year window and 1\-year out forecasts. ``` e <- forecast::tsCV(traindat, fun, h=1, order=c(0,1,1), window=10) #RMSE tscvf1 <- c(ME=mean(e, na.rm=TRUE), RMSE=sqrt(mean(e^2, na.rm=TRUE)), MAE=mean(abs(e), na.rm=TRUE)) tscvf1 ``` ``` ## ME RMSE MAE ## 0.1387670 0.2286572 0.1942840 ``` #### All the forecast tests together Here are all 4 types of forecast tests together. There is no single right approach. Time series cross\-validation has the advantage that you test many more forecasts and use all your data. ``` comp.tab <- rbind(train.test=test1[c("ME","RMSE","MAE")], tsCV.variable1=tscv1, tsCV.variable4=tscv4, tsCV.fixed1=tscvf1) knitr::kable(comp.tab, format="html") ``` | | ME | RMSE | MAE | | --- | --- | --- | --- | | train.test | \-0\.2925326 | 0\.3201093 | 0\.2925326 | | tsCV.variable1 | 0\.1128788 | 0\.2261706 | 0\.1880392 | | tsCV.variable4 | 0\.2839064 | 0\.3812815 | 0\.3359689 | | tsCV.fixed1 | 0\.1387670 | 0\.2286572 | 0\.1942840 | 
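The same three error summaries (ME, RMSE, MAE) are computed by hand several times above. A small helper keeps those calculations consistent; this is a sketch rather than code from the original text, and the function name `err_metrics` is ours.

```
# Summarize a vector (or ts) of forecast errors, e.g. as returned by forecast::tsCV()
err_metrics <- function(e){
  c(ME   = mean(e, na.rm = TRUE),
    RMSE = sqrt(mean(e^2, na.rm = TRUE)),
    MAE  = mean(abs(e), na.rm = TRUE))
}
# usage, assuming traindat and fun from above:
# err_metrics(forecast::tsCV(traindat, fun, h = 1, order = c(0, 1, 1)))
```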
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/5-4-candidate-model-set.html
5\.4 Candidate model set ------------------------ Once you have explored a variety of forecasting models, you can come up with a candidate set of models along with a set of null models. Here are our candidate models for the anchovy, along with the code to fit and create a forecast from each model. * Exponential smoothing model with trend ``` fit <- forecast::ets(traindat, model="AAN") fr <- forecast::forecast(fit, h=1) ``` * Exponential smoothing model no trend ``` fit <- forecast::ets(traindat, model="ANN") fr <- forecast::forecast(fit, h=1) ``` * ARIMA(0,1,1\) with drift (best) ``` fit <- forecast::Arima(traindat, order=c(0,1,1), include.drift=TRUE) fr <- forecast::forecast(fit, h=1) ``` * ARIMA(2,1,0\) with drift (within 2 AIC of best) ``` fit <- forecast::Arima(traindat, order=c(2,1,0), include.drift=TRUE) fr <- forecast::forecast(fit, h=1) ``` * Time\-varying regression with linear time ``` # put the time series and a time index in a data frame for lm() dat <- data.frame(log.metric.tons=traindat, t=1:24) fit <- lm(log.metric.tons ~ t, data=dat) fr <- forecast::forecast(fit, newdata=data.frame(t=25)) ``` We also need to include null models in our candidate set. #### Null models * Naive no trend ``` fit <- forecast::Arima(traindat, order=c(0,1,0)) fr <- forecast::forecast(fit, h=1) # or simply fr <- forecast::rwf(traindat, h=1) ``` * Naive with trend ``` fit <- forecast::Arima(traindat, order=c(0,1,0), include.drift=TRUE) fr <- forecast::forecast(fit, h=1) # or simply fr <- forecast::rwf(traindat, drift=TRUE, h=1) ``` * Average or mean ``` fit <- forecast::Arima(traindat, order=c(0,0,0)) fr <- forecast::forecast(fit, h=1) ```
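As a quick check (not from the original text), the two ways of writing the naive no\-trend model above should give identical point forecasts, since an ARIMA(0,1,0\) forecast is just the last observed value. A minimal sketch, assuming `traindat` is the training series used above:

```
f1 <- forecast::forecast(forecast::Arima(traindat, order = c(0, 1, 0)), h = 1)
f2 <- forecast::rwf(traindat, h = 1)
all.equal(as.numeric(f1$mean), as.numeric(f2$mean))  # expect TRUE
```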
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/5-5-testing-the-candidate-model-set.html
5\.5 Testing the candidate model set ------------------------------------ With a set of candidate models, we can prepare tables showing the forecast performance for each model. For each model, we will do the same steps: 1. Fit the model 2. Create forecasts for a test data set or use cross\-validation 3. Compute forecast accuracy metrics for the forecasts Note that when you compare models, you can use both ‘training data/test data’ and time\-series cross\-validation, but report the metrics in separate columns. For example, ‘RMSE from tsCV’ and ‘RMSE from test data’. ### 5\.5\.1 Fit each 
of our candidate models We will define the training data as 1964 to 1987 and the test data as 1988 and 1989\. The full data is 1964 to 1989\. ``` fulldat <- window(anchovyts, 1964, 1989) traindat <- window(anchovyts, 1964, 1987) testdat <- window(anchovyts, 1988, 1989) ``` We will store our fits and forecasts in a list for easy access. `fun.list` is the function to pass to `tsCV()`. ``` fit.list <- list() fr.list <- list() fun.list <- list() n.fr <- length(testdat) ``` For each model, we will fit, forecast, and define a forecast function. * Exponential smoothing model with trend ``` modelname <- "ETS w trend" fit <- ets(traindat, model="AAN") fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ forecast(ets(x, model="AAN"), h=h) } ``` * Exponential smoothing model no trend ``` modelname <- "ETS no trend" fit <- ets(traindat, model="ANN") fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ forecast(ets(x, model="ANN"), h=h) } ``` * ARIMA(0,1,1\) with drift (best) ``` modelname <- "ARIMA(0,1,1) w drift" fit <- Arima(traindat, order=c(0,1,1), include.drift=TRUE) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ forecast(Arima(x, order=c(0,1,1), include.drift=TRUE),h=h) } ``` * ARIMA(2,1,0\) with drift (within 2 AIC of best) ``` modelname <- "ARIMA(2,1,0) w drift" fit <- Arima(traindat, order=c(2,1,0), include.drift=TRUE) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ forecast(Arima(x, order=c(2,1,0), include.drift=TRUE),h=h) } ``` * Time\-varying regression with linear time ``` TT <- length(traindat) #make a data.frame for lm dat <- data.frame(log.metric.tons=traindat, t=1:TT) modelname <- "TV linear regression" fit <- lm(log.metric.tons ~ t, data=dat) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, newdata=data.frame(t=TT+1:n.fr)) fun.list[[modelname]] <- function(x, h){ TT <- length(x) dat <- data.frame(log.metric.tons=x, t=1:TT) ft <- lm(log.metric.tons ~ t, data=dat) forecast(ft, newdata=data.frame(t=TT+h)) } ``` * Naive no trend ``` modelname <- "Naive" fit <- Arima(traindat, order=c(0,1,0)) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ rwf(x,h=h) } ``` * Naive with trend ``` modelname <- "Naive w trend" fit <- Arima(traindat, order=c(0,1,0), include.drift=TRUE) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ rwf(x, drift=TRUE, h=h) } ``` * Average or mean ``` modelname <- "Average" fit <- Arima(traindat, order=c(0,0,0)) fit.list[[modelname]] <- fit fr.list[[modelname]] <- forecast(fit, h=n.fr) fun.list[[modelname]] <- function(x, h){ forecast(Arima(x, order=c(0,0,0)),h=h) } ```
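Because the fits and forecasts are stored in named lists, any single model can be pulled out by name for closer inspection. A brief usage sketch (not from the original text), assuming the lists built above:

```
names(fr.list)                            # the model names used as list keys
fr.list[["ARIMA(0,1,1) w drift"]]$mean    # point forecasts for 1988-1989
plot(fr.list[["ARIMA(0,1,1) w drift"]])   # forecast plot with prediction intervals
```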
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/5-6-models-fit.html
5\.6 Models fit --------------- Now we can use `names()` to see the models that we have fit. If we want to add more, we use the code above as a template. ``` modelnames <- names(fit.list) modelnames ``` ``` ## [1] "ETS w trend" "ETS no trend" "ARIMA(0,1,1) w drift" ## [4] "ARIMA(2,1,0) w drift" "TV linear regression" "Naive" ## [7] "Naive w trend" "Average" ``` ### 5\.6\.1 Metrics for each model We will run the models and compute the forecast metrics for each and put them in a table. ``` restab <- data.frame(model=modelnames, RMSE=NA, ME=NA, tsCV.RMSE=NA, AIC=NA, BIC=NA, stringsAsFactors = FALSE) for(i in modelnames){ fit <- fit.list[[i]] fr <- fr.list[[i]] restab$RMSE[restab$model==i] <- accuracy(fr, testdat)["Test set","RMSE"] restab$ME[restab$model==i] <- accuracy(fr, testdat)["Test set","ME"] e <- tsCV(traindat, fun.list[[i]], h=1) restab$tsCV.RMSE[restab$model==i] <- sqrt(mean(e^2, na.rm=TRUE)) restab$AIC[restab$model==i] <- AIC(fit) restab$BIC[restab$model==i] <- BIC(fit) } ``` Add on \\(\\Delta\\)AIC and \\(\\Delta\\)BIC. Sort by \\(\\Delta\\)AIC and format to have 3 digits. ``` restab$DeltaAIC <- restab$AIC-min(restab$AIC) restab$DeltaBIC <- restab$BIC-min(restab$BIC) restab <- restab[order(restab$DeltaAIC),] resfor <- format(restab, digits=3, trim=TRUE) ``` Bold the minimum values in each column so they are easy to spot. ``` for(i in colnames(resfor)){ if(class(restab[,i])=="character") next if(i!="ME") testval <- restab[,i] else testval <- abs(restab[,i]) theminrow <- which(testval==min(testval)) resfor[theminrow, i] <- paste0("**",resfor[theminrow,i],"**") } ``` This is the table of FORECAST performance metrics: not how well the model fits the data, but how well it forecasts out of the data. RMSE and ME are for the 2 data points in 1988 and 1989 that were held out for testing. tsCV.RMSE is the RMSE for the time\-series cross\-validation that makes a series of forecasts for each point in the data. AIC and BIC are information criteria, which are a measure of data support for each model. ``` knitr::kable(resfor) ``` | | model | RMSE | ME | tsCV.RMSE | AIC | BIC | DeltaAIC | DeltaBIC | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 5 | TV linear regression | **0\.195** | **\-0\.114** | 0\.247 | **\-7\.00** | \-3\.4611 | **0\.00000** | 0\.123 | | 3 | ARIMA(0,1,1\) w drift | 0\.370 | \-0\.332 | 0\.231 | \-6\.99 | **\-3\.5844** | 0\.00443 | **0\.000** | | 4 | ARIMA(2,1,0\) w drift | 0\.381 | \-0\.347 | 0\.224 | \-6\.08 | \-1\.5399 | 0\.91340 | 2\.044 | | 7 | Naive w trend | 0\.510 | \-0\.483 | 0\.239 | \-2\.18 | 0\.0883 | 4\.81255 | 3\.673 | | 6 | Naive | 0\.406 | \-0\.384 | **0\.222** | \-2\.06 | \-0\.9247 | 4\.93505 | 2\.660 | | 1 | ETS w trend | 0\.538 | \-0\.500 | 0\.251 | 3\.67 | 9\.5587 | 10\.66374 | 13\.143 | | 2 | ETS no trend | 0\.317 | \-0\.289 | 0\.222 | 6\.76 | 10\.2988 | 13\.75990 | 13\.883 | | 8 | Average | 0\.656 | 0\.643 | 0\.476 | 33\.04 | 35\.3924 | 40\.03162 | 38\.977 | 
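The text notes that more models can be added by reusing the template from the previous section. As an illustration only (a damped\-trend ETS is our example choice, not a model from the text), a new entry would look like this; re\-running the `restab` loop above would then include it in the table.

```
modelname <- "ETS w damped trend"   # hypothetical extra model
fit <- ets(traindat, model = "AAN", damped = TRUE)
fit.list[[modelname]] <- fit
fr.list[[modelname]]  <- forecast(fit, h = n.fr)
fun.list[[modelname]] <- function(x, h){ forecast(ets(x, model = "AAN", damped = TRUE), h = h) }
```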
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/6-covariates.html
Chapter 6 Covariates ==================== Often we want to explain the variability in our data using covariates or exogenous variables. We may want to do this in order to create forecasts using information from the covariates in time step \\(t\-1\\) or \\(t\\) to help forecast at time \\(t\\). Or we may want to understand what causes variability in our data in order to help understand the underlying process. We can include covariates in the time\-varying regression model and the ARIMA models. We cannot include covariates in an exponential smoothing model. That doesn’t make sense as a exponential model is a type of filter of the data not a ‘process’ model. In this chapter, I show a number of approaches for including covariates in a multivariate regression model (MREG) with temporally independent errors. This is not a time series model per se, but rather a multivariate regression applied to time\-ordered data. MREG models with auto\-regressive errors and auto\-regressive models with covariates will be addressed in a separate chapter. I illustrate a variety of approaches for developing a set of covariate for a MREG model. The first approach is variable selection, which was the approach used by Stergiou and Christou for their MREG models ([6\.3](6-3-MREGVAR.html#MREGVAR)). The other approaches are penalized regression ([6\.4](6-4-MREGPR.html#MREGPR)), relative importance metrics ([6\.5](6-5-MREGRELPO.html#MREGRELPO)), and orthogonalization ([6\.6](6-6-MREGORTHO.html#MREGORTHO)). These approaches all deal with the problem of selecting a set of covariates to include in your model. Before discussing models with covariates, I will show a variety of approaches for evaluating the collinearity in your covariate set. Collinearity will dramatically affect your inferences concerning the effect of your covariates and needs to be assessed before you begin modeling.
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/6-1-covariates-used-in-stergiou-and-christou.html
6\.1 Covariates used in Stergiou and Christou --------------------------------------------- Stergiou and Christou used five environmental covariates: air temperature (air), sea\-level pressure (slp), sea surface temperature (sst), vertical wind speed (vwnd), and wind speed cubed (wspd3\). I downloaded monthly values for these covariates from the three 1 degree boxes used by Stergiou and Christou from the ICOADS database. I then computed a yearly average over all months in the three boxes. These yearly average environmental covariates are in `covsmean.year`, which is part of `landings` in the **FishForecast** package. ``` require(FishForecast) colnames(ecovsmean.year) ``` ``` ## [1] "Year" "air.degC" "slp.millibars" "sst.degC" ## [5] "vwnd.m/s" "wspd3.m3/s3" ``` The covariates are those in Stergiou and Christou with the following differences. I used the ICOADS data not the COADS data. The boxes are 1 degree but on 1 degree centers not 0\.5 centers. Thus the box is 39\.5\-40\.5 not 39\-40\. ICOADS does not include ‘vertical wind’. I used NS winds which may be different. The code to download the ICOADS data is in the appendix. In addition to the environmental covariates, Stergiou and Christou used many covariates of fishing effort for trawlers, purse seiners, beach seiners, other coastal boats and demersal (sum of trawlers, beach seiners and other coastal boats). For each fishery type, they used data on number of fishers (FI), number of boats (BO), total engine horse power (HP), total boat tonnage (TO). They also used an economic variable: value (VA) of catch for trawlers, purse seiners, beach seiners, other coastal boats. These fishery covariates were extracted from the Greek Statistical Reports ([1\.1\.1](1-1-stergiou-and-christou-1996.html#landingsdata)). ``` colnames(greekfish.cov) ``` ``` ## [1] "Year" "Boats.BO" "Trawlers.BOT" ## [4] "Purse.seiners.BOP" "Beach.seiners.BOB" "Other.BOC" ## [7] "Demersal.BOD" "Fishers.FI" "Trawlers.FIT" ## [10] "Purse.seiners.FIP" "Beach.seiners.FIB" "Other.FIC" ## [13] "Demersal.FID" "Horsepower.HP" "Trawler.HPT" ## [16] "Purse.seiners.HPP" "Beach.seiners.HPB" "Other.HPC" ## [19] "Demersal.HPD" "Trawler.VAT" "Purse.seiners.VAP" ## [22] "Beach.seiners.VAB" "Other.VAC" "Tonnage.TO" ## [25] "Trawlers.TOT" "Purse.seiners.TOP" ``` For anchovy, the fishery effort metrics from the purse seine fishery were used. Lastly, biological covariates were included which were the landings of other species. Stergiou and Christou state (page 118\) that the other species modeled by VAR (page 114\) was included. This would imply that sardine was used as an explanatory variable. However in Table 3 (page 119\), it appears that *Trachurus* (Horse mackerel) was included. It is not clear if sardine was also included but not chosen as an important variable. I included *Trachurus* and not sardine as the biological explanatory variable. ### Preparing the data frame We will model anchovy landings as the response variable. The covariates are lagged by one year, following Stergiou and Christou. This means that the catch in year \\(t\\) is regressed against the covariates in year \\(t\-1\\). We set up our data frame as follows. We use the 1965 to 1987 catch data as the response. We use 1964 to 1986, so year prior, for all the explanatory variables and we log transform the explanatory variables (following Stergiou and Christou). We use \\(t\\) 1 to 23 as a “year” covariate. 
Our data frame will have the following columns: ``` colnames(df) ``` ``` ## [1] "anchovy" "Year" "Trachurus" "air" "slp" "sst" ## [7] "vwnd" "wspd3" "BOP" "FIP" "HPP" "TOP" ``` In total, there are 11 covariates and 23 years of data—which is not much data per explanatory variable. Section @ref(cov.df) shows the R code to create the `df` data frame with the response variable and all the explanatory variables. For most of the analyses, we will use the untransformed variables, however for some analyses, we will want the effect sizes (the estimated \\(\\beta\\)’s) to be on the same scale. For these analyses, we will use the z\-scored variables, which will be stored in data frame `dfz`. z\-scoring removes the mean and normalizes the variance to 1\. Here is a loop to demean and rescale our data frame. ``` dfz <- df n <- nrow(df) for(i in colnames(df)){ pop_sd <- sd(df[,i])*sqrt((n-1)/n) pop_mean <- mean(df[,i]) dfz[,i] <- (df[,i]-pop_mean)/pop_sd } ``` The function `scale()` will also do a scaling to the unbiased variance instead of the sample variance (divide by \\(n\-1\\) instead of \\(n\\)) and will return a matrix. We will use `dfz` which is scaled to the sample variance as we will need this for the chapter on Principal Components Regression. ``` df.scale <- as.data.frame(scale(df)) ``` --- ### 6\.1\.1 Creating the data frame for model fitting Code to make the `df` data frame used in the model fitting functions. ``` # response df <- data.frame(anchovy=anchovy$log.metric.tons, Year=anchovy$Year) Year1 <- df$Year[1] Year2 <- df$Year[length(df$Year)] df <- subset(df, Year>=Year1+1 & Year<=Year2) # biological covariates df.bio <- subset(greeklandings, Species=="Horse.mackerel")[,c("Year","log.metric.tons")] df.bio <- subset(df.bio, Year>=Year1 & Year<=Year2-1)[,-1,drop=FALSE] # [,-1] to remove year colnames(df.bio) <- "Trachurus" # environmental covariates ecovsmean.year[,"vwnd.m/s"]<- abs(ecovsmean.year[,"vwnd.m/s"]) df.env <- log(subset(ecovsmean.year, Year>=Year1 & Year<=Year2-1)[,-1]) # fishing effort df.fish <- log(subset(greekfish.cov, Year>=Year1 & Year<=Year2-1)[,-1]) purse.cols <- stringr::str_detect(colnames(df.fish),"Purse.seiners") df.fish <- df.fish[,purse.cols] df.fish <- df.fish[!(colnames(df.fish)=="Purse.seiners.VAP")] # assemble df <- data.frame( df, df.bio, df.env, df.fish ) df$Year <- df$Year-df$Year[1]+1 colnames(df) <- sapply(colnames(df), function(x){rev(stringr::str_split(x,"Purse.seiners.")[[1]])[1]}) colnames(df) <- sapply(colnames(df), function(x){stringr::str_split(x,"[.]")[[1]][1]}) df <- df[,colnames(df)!="VAP"] # all the data to 2007 df.full <- df # only training data df <- subset(df, Year>=1965-1964 & Year<=1987-1964) save(df, df.full, file="MREG_Data.RData") ```
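A quick check (not in the original) of the relationship described above: the hand\-rolled loop divides by the population standard deviation while `scale()` divides by the sample standard deviation, so the two z\-scores differ only by the constant factor \\(\\sqrt{n/(n\-1)}\\).

```
n <- nrow(df)
all.equal(dfz$anchovy, as.numeric(scale(df$anchovy)) * sqrt(n/(n - 1)))  # expect TRUE
```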
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/6-2-collinear.html
6\.2 Collinearity ----------------- Collinearity is near\-linear relationships among the explanatory variables. Collinearity causes many problems such as inflated standard errors of the coefficients and correspondingly unbiased but highly imprecise estimates of the coefficients, false p\-values, and poor predictive accuracy of the model. Thus it is important to evaluate the level of collinearity in your explanatory variables. ``` library(ggplot2) library(car) library(Hmisc) library(corrplot) library(olsrr) ``` #### Pairs plot One way to see this visually is with the `pairs()` plot. A pairs plot of fishing effort covariates reveals high correlations between Year, HPP and TOP. ``` pairs(df[,c(2,9:12)]) ``` The environmental covariates look generally ok. ``` pairs(df[,c(2,4:8)]) ``` Another way that we can visualize the problem is by looking at the correlation matrix using the **corrplot** package. ``` library(corrplot) X <- as.matrix(df[,colnames(df)!="anchovy"]) corrplot::corrplot(cor(X)) ``` #### Variance inflation factors Another way to look for collinearity is to compute the variance inflation factors (VIF). The variance inflation factor is an estimate of how much larger the variance of a coefficient estimate is compared to if the variable were uncorrelated with the other explanatory variables in the model. If the VIF of variable \\(i\\) is \\(z\\), then the standard error of the \\(\\beta\_i\\) for variable \\(i\\) is \\(\\sqrt{z}\\) times larger than if variable \\(i\\) were uncorrelated with the other variables. For example, if VIF\=10, the standard error of the coefficient estimate is 3\.16 times larger (inflated). The rule of thumb is that any of the variables with VIF greater than 10 have collinearity problems. The `vif()` function in the **car** package will compute VIFs for us. ``` full <- lm(anchovy ~ ., data=df) car::vif(full) ``` ``` ## Year Trachurus air slp sst vwnd wspd3 ## 103.922970 18.140279 3.733963 3.324463 2.476689 2.010485 1.909992 ## BOP FIP HPP TOP ## 13.676208 8.836446 63.507170 125.295727 ``` The `ols_vif_tol()` function in the **olsrr** [package](https://cran.r-project.org/package=olsrr) also computes the VIF. ``` olsrr::ols_vif_tol(full) ``` (\#tab:vif.olsrr) | Variables | Tolerance | VIF | | --- | --- | --- | | Year | 0\.00962 | 104 | | Trachurus | 0\.0551 | 18\.1 | | air | 0\.268 | 3\.73 | | slp | 0\.301 | 3\.32 | | sst | 0\.404 | 2\.48 | | vwnd | 0\.497 | 2\.01 | | wspd3 | 0\.524 | 1\.91 | | BOP | 0\.0731 | 13\.7 | | FIP | 0\.113 | 8\.84 | | HPP | 0\.0157 | 63\.5 | | TOP | 0\.00798 | 125 | This shows that Year, HPP and TOP have severe collinearity problems, and BOP and *Trachurus* also have collinearity issues, though to a lesser degree. #### Condition indices Condition indices are computed from the eigenvalues of the correlation matrix of the variates. The size of the index will be greatly affected by whether you have standardized the variance of your covariates, unlike the other tests described here. 
\\\[ci \= \\sqrt{max(eigenvalue)/eigenvalue}\\] ``` vars <- as.matrix(dfz[,-1]) res <- eigen(crossprod(vars))$values sqrt(max(res)/res) ``` ``` ## [1] 1.000000 1.506975 2.235652 2.332424 3.025852 3.895303 4.753285 ## [8] 5.419310 7.977486 20.115739 35.515852 ``` See the information from the olsrr package on [condition indices](https://cran.r-project.org/web/packages/olsrr/vignettes/regression_diagnostics.html) for how to use condition indices to spot collinearity. Basically you are looking for condition indices greater than 30 where the proportion of variance for the covariate is greater than 0\.5\. In the table below, this criterion identifies Year, BOP, and TOP. Note that the test was done with the standardized covariates (`dfz`). ``` model <- lm(anchovy ~ ., data=dfz) round(olsrr::ols_eigen_cindex(model), digit=2) ``` Table 6\.1: | Eigenvalue | Condition Index | intercept | Year | Trachurus | air | slp | sst | vwnd | wspd3 | BOP | FIP | HPP | TOP | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 5\.25 | 1 | 0 | 0 | 0 | 0 | 0\.01 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 2\.31 | 1\.51 | 0 | 0 | 0 | 0\.03 | 0 | 0\.04 | 0\.03 | 0\.02 | 0 | 0 | 0 | 0 | | 1\.05 | 2\.24 | 0 | 0 | 0 | 0 | 0\.01 | 0 | 0\.13 | 0\.17 | 0 | 0\.02 | 0 | 0 | | 1 | 2\.29 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | | 0\.96 | 2\.33 | 0 | 0 | 0 | 0 | 0\.05 | 0\.02 | 0\.1 | 0\.21 | 0\.01 | 0\.01 | 0 | 0 | | 0\.57 | 3\.03 | 0 | 0 | 0 | 0\.03 | 0\.15 | 0\.24 | 0\.12 | 0\.01 | 0\.02 | 0 | 0 | 0 | | 0\.35 | 3\.9 | 0 | 0 | 0 | 0\.14 | 0\.13 | 0\.24 | 0\.04 | 0\.16 | 0\.01 | 0\.09 | 0 | 0 | | 0\.23 | 4\.75 | 0 | 0 | 0\.02 | 0\.33 | 0\.15 | 0\.15 | 0\.45 | 0\.07 | 0\.01 | 0\.03 | 0 | 0 | | 0\.18 | 5\.42 | 0 | 0\.01 | 0\.18 | 0\.09 | 0\.09 | 0\.08 | 0\.01 | 0 | 0 | 0\.03 | 0\.01 | 0 | | 0\.08 | 7\.98 | 0 | 0\.02 | 0\.04 | 0\.23 | 0\.29 | 0\.09 | 0\.01 | 0\.16 | 0\.4 | 0\.12 | 0 | 0 | | 0\.01 | 20\.1 | 0 | 0\.05 | 0\.29 | 0\.02 | 0\.04 | 0\.07 | 0\.04 | 0\.13 | 0 | 0\.67 | 0\.64 | 0\.15 | | 0 | 35\.5 | 0 | 0\.92 | 0\.47 | 0\.12 | 0\.09 | 0\.06 | 0\.07 | 0\.06 | 0\.55 | 0\.01 | 0\.35 | 0\.84 | #### redun() The **Hmisc** library also has a redundancy function (`redun()`) that can help identify which variables are redundant. This identifies variables that can be explained with an \\(R^2\>0\.9\\) by a linear (or non\-linear) combination of other variables. We are fitting a linear model, so we set `nk=0` to force `redun()` to only look at linear combinations. We use `redun()` only on the explanatory variables and thus remove the first column, which is our response variable (anchovy). ``` a <- Hmisc::redun(~ .,data=df[,-1], nk=0) a$Out ``` ``` ## [1] "TOP" "HPP" ``` This indicates that TOP and HPP can be explained by the other variables. ### 6\.2\.1 Effect of collinearity One thing that happens when we have collinearity is that we will get "complementary" (negative matched by positive) and very large coefficients in the variables that are collinear. We see this when we fit a linear regression with all the variables. I use the z\-scored data so that the effect sizes (x\-axis) are on the same scale. The Year coefficient is very large and the TOP and HPP coefficients are negative and very large. If we look at the fit, we see that the standard errors for Year, TOP and HPP are very large. 
The p\-value for Year is significant; however, in the presence of severe collinearity, reported p\-values should not be trusted. ``` # fit with all explanatory variables using the z-scored data fit.full <- lm(anchovy ~ ., data=dfz) summary(fit.full) ``` ``` ## ## Call: ## lm(formula = anchovy ~ ., data = dfz) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.4112 -0.1633 -0.0441 0.1459 0.5009 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -9.175e-15 7.003e-02 0.000 1.0000 ## Year 2.118e+00 7.139e-01 2.966 0.0128 * ## Trachurus -6.717e-02 2.983e-01 -0.225 0.8260 ## air 2.987e-01 1.353e-01 2.207 0.0495 * ## slp -5.023e-02 1.277e-01 -0.393 0.7016 ## sst -7.250e-02 1.102e-01 -0.658 0.5242 ## vwnd 1.530e-01 9.930e-02 1.540 0.1517 ## wspd3 6.086e-02 9.679e-02 0.629 0.5423 ## BOP 3.137e-01 2.590e-01 1.211 0.2512 ## FIP 1.347e-01 2.082e-01 0.647 0.5309 ## HPP -5.202e-01 5.581e-01 -0.932 0.3713 ## TOP -8.068e-01 7.839e-01 -1.029 0.3255 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.3359 on 11 degrees of freedom ## Multiple R-squared: 0.946, Adjusted R-squared: 0.8921 ## F-statistic: 17.53 on 11 and 11 DF, p-value: 2.073e-05 ``` Stergiou and Christou do not state how (if at all) they address the collinearity in the explanatory variables, but it is clearly present. In the next chapter, I will show how to develop a multivariate regression model using variable selection. This is the approach used by Stergiou and Christou. Keep in mind that variable selection will not perform well when there is collinearity in your covariates and that variable selection is prone to over\-fitting and selecting covariates due to chance. 
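A VIF can also be computed by hand, which makes its interpretation concrete: regress one covariate on all the others and use \\(VIF\_i \= 1/(1\-R^2\_i)\\). A sketch (not from the original text) for TOP, which should reproduce the value of roughly 125 reported by `car::vif()` above:

```
# Regress TOP on the other explanatory variables (response column dropped)
r2.top <- summary(lm(TOP ~ ., data = df[, colnames(df) != "anchovy"]))$r.squared
1/(1 - r2.top)   # ~125, matching car::vif(full)["TOP"]
```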
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/6-3-MREGVAR.html
6\.3 Variable selection ----------------------- In this chapter, I will illustrate developing a forecasting model using a multivariate regression (MREG). I will show the variable selection approach that Stergiou and Christou used to develop MREG models. More background on the methods discussed in this chapter can be found in the references in the endnotes.[1](#fn1) [2](#fn2) [3](#fn3) [4](#fn4) [5](#fn5). A multivariate linear regression model with Gaussian errors takes the form: \\\[\\begin{equation} \\begin{gathered} x\_t \= \\alpha \+ \\phi\_1 c\_{t,1} \+ \\phi\_2 c\_{t,2} \+ \\dots \+ e\_t \\\\ e\_t \\sim N(0,\\sigma) \\end{gathered} \\end{equation}\\] In R, we can fit this model with `lm()`, which uses ordinary least squares (OLS). For model selection (determining what explanatory variables to include), there are a variety of approaches we can take. I will show approaches that use a few different packages. ``` library(ggplot2) library(MASS) library(car) library(glmnet) library(stringr) library(caret) library(leaps) library(forecast) library(olsrr) ``` ### 6\.3\.1 Model selection with stepwise variable selection Stergiou and Christou state that the covariates to include were selected with stepwise variable selection. Stepwise variable selection is a type of automatic variable selection. Stepwise variable selection has many statistical problems and the problems are worse when the covariates are collinear as they are in our case (see this [link](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/) for a review of the problems with stepwise variable selection). The gist of the problem is one of over\-fitting. A stepwise selection procedure will tend to choose variables that, by chance, have large coefficients. With only 23 data points and high collinearity, this is likely to be a rather large problem for our dataset. As we saw, collinearity tends to cause very large positive effect sizes offset by large negative effect sizes. However I use stepwise variable selection here to replicate Stergiou and Christou. I will follow this with an example of other more robust approaches to model selection for linear regression. Stergiou and Christou do not give specifics on how they implemented stepwise variable selection. Stepwise variable selection refers to a forward\-backward search, however there are many ways we can implement this and different approaches give different answers. The starting model in particular will have a large effect on the ending model. #### step() When using the `step()` function in the stats package (and the related `stepAIC()` function in the MASS package) , we specify the starting model and the scope of the search, i.e., the smallest model and the largest model. We set direction equal to “both” to specify stepwise variable selection. We also need to specify the selection criteria. The default is to use AIC. Let’s start with a search that starts with a full model which has all the explanatory variables. The first argument to `step()` is the starting model and `scope` specifies the maximum and minimum models as a list. `direction="both"` is stepwise variable selection. `trace=0` turns off the reporting. 
``` null <- lm(anchovy ~ 1, data=df) full <- lm(anchovy ~ ., data=df) step.full <- step(full, scope=list(lower=null, upper=full), direction="both", trace = 0) step.full ``` ``` ## ## Call: ## lm(formula = anchovy ~ Year + air + vwnd + BOP + FIP + TOP, data = df) ## ## Coefficients: ## (Intercept) Year air vwnd BOP FIP ## -5.6500 0.1198 3.7000 0.1320 1.8051 1.0189 ## TOP ## -1.7894 ``` We can also apply `step()` with the caret package: ``` step.caret <- caret::train(anchovy ~ ., data = df, method = "lmStepAIC", direction = "both", trace = FALSE ) ``` ``` ## Warning: attempting model selection on an essentially perfect fit is nonsense ## Warning: attempting model selection on an essentially perfect fit is nonsense ``` ``` step.caret$finalModel ``` ``` ## ## Call: ## lm(formula = .outcome ~ Year + air + vwnd + BOP + FIP + TOP, ## data = dat) ## ## Coefficients: ## (Intercept) Year air vwnd BOP FIP ## -5.6500 0.1198 3.7000 0.1320 1.8051 1.0189 ## TOP ## -1.7894 ``` Note that `method="lmStepAIC"` in the `train()` function will always start with the full model. The AIC for this model is \-19\.6\. This is a larger model than that reported in Table 3 (page 119\) of Stergiou and Christou. The model in Table 3 includes only Year, *Trachurus* catch, SST, and FIP. The model selected by `step()` starting from the full model includes Year, *Trachurus* catch, air temperature, vertical wind, BOP, FIP and TOP. Let’s repeat but start the search with the smallest model. ``` null <- lm(anchovy ~ 1, data=df) full <- lm(anchovy ~ ., data=df) step.null <- step(null, scope=list(lower=null, upper=full), direction="both", trace = 0) step.null ``` ``` ## ## Call: ## lm(formula = anchovy ~ Year + FIP + Trachurus + air, data = df) ## ## Coefficients: ## (Intercept) Year FIP Trachurus air ## -0.51874 0.08663 0.81058 -0.28602 1.62735 ``` This model has an AIC of \-18\.7\. This AIC is larger (worse), which illustrates that you need to be careful how you set up the search. This selected model is very similar to that in Table 3 except that air temperature instead of SST is selected. Air temperature and SST are correlated, however. The air temperature is removed from the best model if we use BIC as the model selection criteria. This is done by setting `k=log(n)` where \\(n\\) is sample size. ``` step.null.bic <- step(null, scope=list(lower=null, upper=full), direction="both", trace = 0, k=log(nrow(df))) step.null.bic ``` ``` ## ## Call: ## lm(formula = anchovy ~ Year + FIP + Trachurus, data = df) ## ## Coefficients: ## (Intercept) Year FIP Trachurus ## 2.81733 0.08836 0.98541 -0.30092 ``` We can also do stepwise variable selection using the leaps package. However, the algorithm or starting model is different than for `step()` and the results are correspondingly different. The top row in the plot shows the included (black) variables: Year, Trachurus, air, vwnd, FIP. The results are similar to `step()` starting from the full model but not identical. See the next section for a brief introduction to the leaps package. ``` models <- leaps::regsubsets(anchovy~., data = df, nvmax =11, method = "seqrep", nbest=1) plot(models, scale="bic") ``` #### leaps() We can use the leaps package to do a full search of the model space. The function `leaps::regsubsets()` will find the `nbest` models of size (number of explanatory variables) 1 to `nvmax` using different types of searches: exhaustive, forward, backward, and stepwise variable selection. We can then plot these best models of each size against a criteria. such as BIC. 
leaps allows us to plot against BIC, Cp (asymptotically the same as AIC and LOOCV), \\(R^2\\) and adjusted \\(R^2\\). Each row in the plot is a model. The dark shading shows which variables are in the model. On the y\-axis, farther away from the x\-axis is better, so the models (rows) at the top of the plot are the best models.

Let’s start with an exhaustive search and show only the best model of each size, where size is the number of explanatory variables in the model.

```
models <- leaps::regsubsets(anchovy~., data = df, nvmax = 11, nbest=1, method = "exhaustive")
plot(models, scale="bic")
```

We see that when we use BIC as the selection criterion, the best model has Year, *Trachurus*, and FIP.

Let’s look at more than one model for each model size. Let’s take the top 3 models for each model size and look at their BICs.

```
models <- leaps::regsubsets(anchovy~., data = df, nvmax = 11, nbest=3, method = "exhaustive")
plot(models, scale="bic")
```

We can also plot the BIC for each size of model.

```
smodels = summary(models)
nvar <- apply(smodels$which,1,sum)-1
plot(nvar, smodels$bic, xlab = "Number of Variables", ylab = "BIC")
min.bic <- which.min(smodels$bic)
points(nvar[min.bic], smodels$bic[min.bic], pch = 20, col = "red")
abline(h = smodels$bic[min.bic]+2, lty=2)
```

These two plots show that there are many models within 2 of the top model. All the best models have Year and FIP, but there are many different 3rd and 4th variables that can be added and give a similar BIC. Interestingly, SST does not appear in any of the top models, while it was selected by Stergiou and Christou. This suggests that they computed the yearly SST values slightly differently than I did. My remote sensing data source was slightly different and that might be the cause.

#### 6\.3\.1\.1 Comparison of models chosen by AIC, AICc and BIC

`step()` uses AIC instead of the AICc (corrected for small sample size). In our case, \\(n\=23\\) is fairly small, and AICc would be better suited for such a small dataset. leaps does not return AIC or AICc, but we can compute them. Note that Mallow’s Cp asymptotically has the same ordering as AIC, but \\(n\=23\\) is small and it does not have the same ordering as AIC in our case.

First we use `summary()` to get a matrix showing the best model of each size. This matrix shows which variables are in the best model of each size. Note that this best model does not depend on the metric (BIC, AIC, etc.) because we are comparing models with the same number of variables. The metric affects the penalty for different numbers of variables and thus only affects the model choice when we compare models of different sizes.

```
models <- leaps::regsubsets(anchovy~., data = df, nvmax = 11, nbest=1, method = "exhaustive")
smodels <- summary(models)
head(smodels$which[,1:10])
```

```
## (Intercept) Year Trachurus air slp sst vwnd wspd3 BOP FIP
## 1 TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 2 TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
## 3 TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
## 4 TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE TRUE
## 5 TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE FALSE TRUE
## 6 TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE TRUE
```

Next we compute AIC and AICc from BIC. `k` is the number of parameters. We need to add one more parameter for the estimated variance.
```
k <- apply(smodels$which,1,sum)+1
mod.aicc <- smodels$bic+k*(2+(2*k+2)/(23-k-1))-log(23)*k
mod.aic <- smodels$bic+k*2-log(23)*k
```

Now we will plot the metrics for each model size. BIC, AICc and Mallow’s Cp all chose models with an intercept and 3 variables: Year, *Trachurus* and FIP. AIC selects a much larger model; however, with \\(n\=23\\), AICc would be a better choice.

To find the best model, find the row of the `smodels` matrix where AICc is the smallest. For example, here is the best model with AICc.

```
rmin <- which(mod.aicc==min(mod.aicc))
colnames(smodels$which)[smodels$which[rmin,]]
```

```
## [1] "(Intercept)" "Year" "Trachurus" "FIP"
```

In comparison, the best model with AIC is larger.

```
rmin <- which(mod.aic==min(mod.aic))
colnames(smodels$which)[smodels$which[rmin,]]
```

```
## [1] "(Intercept)" "Year" "air" "vwnd" "BOP"
## [6] "FIP" "TOP"
```

#### **olsrr** package

The **olsrr** [package](https://CRAN.R-project.org/package=olsrr) provides a variety of tools for multivariate regression models, including functions for variable selection. The **olsrr** functions produce nice table and plot outputs. The functions are a bit more user\-friendly, and the package includes very clear vignettes that illustrate the functions. The package includes functions for all\-subsets, forward and backward selection with a variety of different selection metrics. Here is an example of one of the functions. This is for all\-subsets selection.

```
ols_step_best_subset(full)
```

| mindex | n | predictors | rsquare | adjr | predrsq | cp | aic | sbic | sbc | msep | fpe | apc | hsp |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | Year | 0\.826 | 0\.818 | 0\.79 | 16\.5 | \-6\.63 | \-73\.6 | \-3\.22 | 0\.0406 | 0\.0403 | 0\.207 | 0\.00185 |
| 2 | 2 | Year FIP | 0\.887 | 0\.876 | 0\.846 | 6\.01 | \-14\.6 | \-79\.9 | \-10\.1 | 0\.0291 | 0\.0285 | 0\.147 | 0\.00133 |
| 3 | 3 | Year Trachurus FIP | 0\.911 | 0\.897 | 0\.865 | 3\.1 | \-18\.1 | \-81\.4 | \-12\.5 | 0\.0254 | 0\.0245 | 0\.126 | 0\.00116 |
| 4 | 4 | Year Trachurus air FIP | 0\.921 | 0\.903 | 0\.853 | 3\.21 | \-18\.7 | \-80\.1 | \-11\.9 | 0\.0254 | 0\.024 | 0\.124 | 0\.00116 |
| 5 | 5 | Year Trachurus air vwnd FIP | 0\.927 | 0\.905 | 0\.837 | 3\.96 | \-18\.5 | \-77\.9 | \-10\.6 | 0\.0264 | 0\.0243 | 0\.125 | 0\.0012 |
| 6 | 6 | Year air vwnd BOP FIP TOP | 0\.936 | 0\.912 | 0\.828 | 4\.06 | \-19\.6 | \-75\.4 | \-10\.5 | 0\.0261 | 0\.0233 | 0\.12 | 0\.00119 |
| 7 | 7 | Year air slp vwnd BOP FIP TOP | 0\.94 | 0\.912 | 0\.824 | 5\.25 | \-19\.1 | \-71\.9 | \-8\.88 | 0\.028 | 0\.0241 | 0\.124 | 0\.00128 |
| 8 | 8 | Year air vwnd wspd3 BOP FIP HPP TOP | 0\.943 | 0\.911 | 0\.798 | 6\.57 | \-18\.4 | \-67\.9 | \-7\.07 | 0\.0305 | 0\.0252 | 0\.13 | 0\.00139 |
| 9 | 9 | Year air sst vwnd wspd3 BOP FIP HPP TOP | 0\.945 | 0\.907 | 0\.782 | 8\.22 | \-17\.1 | \-63\.6 | \-4\.64 | 0\.0345 | 0\.0271 | 0\.14 | 0\.00158 |
| 10 | 10 | Year air slp sst vwnd wspd3 BOP FIP HPP TOP | 0\.946 | 0\.901 | 0\.768 | 10\.1 | \-15\.5 | \-59\.3 | \-1\.85 | 0\.0402 | 0\.0298 | 0\.154 | 0\.00183 |
| 11 | 11 | Year Trachurus air slp sst vwnd wspd3 BOP FIP HPP TOP | 0\.946 | 0\.892 | 0\.732 | 12 | \-13\.6 | \-55 | 1\.18 | 0\.048 | 0\.0333 | 0\.172 | 0\.00219 |

### 6\.3\.2 Model selection with cross\-validation[6](#fn6)

Variable selection (forward, backward, stepwise) is known to overfit models: variables will be chosen that just happen to have high correlation with your response variable for your particular dataset. The result is models with low out\-of\-sample predictive accuracy.
Cross\-validation is a way to try to deal with that problem. Model selection with cross\-validation estimates the out\-of\-sample predictive performance of a *process* for building a model. So, for example, you could use cross\-validation to ask the question, “If I select a best model with AIC, does that approach lead to models with better predictive performance than selecting a best model with BIC?”.

The basic idea behind cross\-validation is that part of the data is used for fitting (training) the model and the left\-out data is used for assessing predictions. You predict the left\-out data and compare the actual data to the predictions. There are two common types of cross\-validation: leave\-one\-out cross\-validation (LOOCV) and k\-fold cross\-validation.

Leave\-one\-out cross\-validation (LOOCV) is a cross\-validation where you leave one data point out, fit to the rest of the data, predict the left\-out data point, and compute the prediction error as the prediction minus the actual data value. This is repeated for all data points. So you will have \\(n\\) prediction errors if you have \\(n\\) data points. From these errors, you can compute various statistics. Root mean squared error (RMSE), mean squared error (MSE), and mean absolute error (MAE) are common.

k\-fold cross\-validation is a cross\-validation where you divide the data into k equal fractions. The model is fit k times: each fraction is treated as a test data set and the other k\-1 fractions are used as the training data. When the model is fit, you predict the data in the test set and compute the prediction errors. Then you compute the statistics (RMSE, MSE, etc.) from the errors from all k test sets. There are many different ways you can split your data into k fractions. Thus one often repeats this process many times and uses the average. This is called repeated cross\-validation.

#### Example code

Let’s see an example of this using models fit via stepwise variable selection with `leaps::regsubsets()`. Let’s start by defining a `predict` function for `regsubsets` objects[7](#fn7).

```
predict.regsubsets <- function(object, newdata, id, ...) {
  form <- as.formula(object$call[[2]])
  mat <- model.matrix(form, newdata)
  coefi <- leaps:::coef.regsubsets(object, id = id)
  mat[, names(coefi)] %*% coefi
}
```

Next we set up a matrix that defines the folds. Each row has numbers 1 to k (folds) which specify which data points are in the test set. The other (non\-k) data points will be the training set. Each row of `folds` is a different replicate of the repeated cross\-validation.

```
nfolds <- 5
nreps <- 20
folds <- matrix(NA, nreps, nrow(df))
for(i in 1:nreps) folds[i,] <- sample(rep(1:nfolds, length = nrow(df)))
```

Now we can use `df[folds[r,]==k,]` to specify the test data for the k\-th fold of the r\-th replicate, and `df[folds[r,]!=k,]` is the training dataset for the k\-th fold of the r\-th replicate. The **fold** jargon is just another word for group: we divide the data into k groups and we call each group a **fold**.

Next we set up a matrix to hold the prediction errors. We will have prediction errors for each fold, each replicate, and each model size (columns).

```
nvmax <- 8
cv.errors <- matrix(0, nreps*nfolds, nvmax)
```

Now, we step through each replicate and each fold in each replicate. We find the best fit with `regsubsets()` applied to the *training set* for that replicate. Then we predict using that best fit to the test data for that replicate. We compute the errors (prediction minus data) and store them.
When we are done, we compute the RMSE (or whatever metric we want).

```
for(r in 1:nreps){
  for (k in 1:nfolds) {
    traindat <- df[folds[r,]!=k,]
    testdat <- df[folds[r,]==k,]
    best.fit <- leaps::regsubsets(anchovy ~ ., data=traindat, nvmax = nvmax, method = "seqrep")
    for (i in 1:nvmax) {
      pred <- predict.regsubsets(best.fit, testdat, id = i)
      cv.errors[r+(k-1)*nreps, i] <- mean((testdat$anchovy - pred)^2)
    }
  }
}
rmse.cv <- sqrt(apply(cv.errors, 2, mean, na.rm=TRUE))
plot(1:nvmax, rmse.cv, pch = 19, type = "b", xlab="Number of Variables", ylab="RMSE")
```

The model size with the best predictive performance is smaller: intercept plus 2 variables instead of intercept plus 3 variables. This suggests that we should constrain our model size to 2 variables (plus intercept). Note that with a 5\-fold cross\-validation, we were fitting the models to 19 data points instead of 23\. However, even with a 23\-fold cross\-validation (leave\-one\-out CV), a model with 2 variables has the lowest RMSE.

The best\-fit 2\-variable model has Year and FIP.

```
best.fit <- leaps::regsubsets(anchovy ~ ., data=traindat, nvmax = 2, method = "seqrep")
tmp <- summary(best.fit)$which
colnames(tmp)[tmp[2,]]
```

```
## [1] "(Intercept)" "Year" "FIP"
```

#### Cross\-validation with caret package

The `train()` function in the **caret** package allows us to fit and cross\-validate model sets easily. `trainControl` specifies the type of cross\-validation and `tuneGrid` specifies the parameter over which cross\-validation will be done (in this case the size of the model).

```
library(caret)
# Set up repeated k-fold cross-validation
train.control <- trainControl(method = "repeatedcv", number=5, repeats=20)
# Train the model
step.model <- train(anchovy~., data = df,
                    method = "leapSeq",
                    tuneGrid = data.frame(nvmax = 1:nvmax),
                    trControl = train.control)
plot(step.model$results$RMSE, pch = 19, type = "b", ylab="RMSE")
```

The `$results` part of the output shows us the cross\-validation metrics. Which model is best depends on the metric we use. A 2\-parameter model is best for all the error metrics except R\-squared.

```
step.model$results
```

| nvmax | RMSE | Rsquared | MAE | RMSESD | RsquaredSD | MAESD |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0\.199 | 0\.857 | 0\.17 | 0\.0674 | 0\.122 | 0\.0611 |
| 2 | 0\.186 | 0\.875 | 0\.163 | 0\.0492 | 0\.0989 | 0\.0477 |
| 3 | 0\.199 | 0\.823 | 0\.164 | 0\.0546 | 0\.164 | 0\.0495 |
| 4 | 0\.212 | 0\.804 | 0\.174 | 0\.0587 | 0\.178 | 0\.0552 |
| 5 | 0\.223 | 0\.779 | 0\.183 | 0\.0551 | 0\.188 | 0\.0531 |
| 6 | 0\.216 | 0\.782 | 0\.178 | 0\.0655 | 0\.183 | 0\.0589 |
| 7 | 0\.215 | 0\.777 | 0\.178 | 0\.063 | 0\.206 | 0\.0563 |
| 8 | 0\.227 | 0\.767 | 0\.193 | 0\.0612 | 0\.208 | 0\.0572 |

The best 2\-parameter model has Year and FIP.

```
coef(step.model$finalModel, id=2)
```

```
## (Intercept) Year FIP
## -0.01122016 0.07297605 1.04079295
```
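The leave\-one\-out comparison mentioned above (a 2\-variable model also has the lowest RMSE under LOOCV) can be checked with caret as well. The following is a minimal sketch, not code from the original text; it assumes the same `df` and `nvmax` objects defined earlier in this section.

```
# Hedged sketch: leave-one-out cross-validation of the same model sizes with
# caret. Assumes `df` and `nvmax` exist as defined above.
library(caret)
loo.control <- trainControl(method = "LOOCV")
loo.model <- train(anchovy ~ ., data = df,
                   method = "leapSeq",
                   tuneGrid = data.frame(nvmax = 1:nvmax),
                   trControl = loo.control)
# RMSE and MAE by model size; per the text above, the 2-variable row
# should have the lowest RMSE
loo.model$results[, c("nvmax", "RMSE", "MAE")]
```

Because LOOCV leaves out one point at a time, there is no fold randomness, so these error estimates do not need to be averaged over repeats.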
6\.4 Penalized regression
-------------------------

The problem with model selection by searching the model space and selecting with some model fit criterion is that the selected model tends to be over\-fit, even when using cross\-validation. The predictive value of the model is not optimal because of over\-fitting.

Another approach to dealing with the variance inflation that arises from collinearity and models with many explanatory variables is penalized regression. The basic idea with penalized regression is that you penalize coefficient estimates that are far from 0\. The true coefficients are (likely) not 0, so fundamentally this will lead to biased coefficient estimates, but the idea is that the inflated variance of the coefficient estimates is the bigger problem.

### 6\.4\.1 Ridge Regression

First, let’s look at ridge regression. With ridge regression, we will assume that the coefficients have a mean of 0 and a variance of \\(1/\\lambda\\). This is our prior on the coefficients. The \\(\\beta\_i\\) are the most probable values given the data and the prior. Note, there are many other ways to derive ridge regression.

We will use the glmnet package to fit the anchovy catch with ridge regression. To fit with a ridge penalty, we set `alpha=0`.

```
library(glmnet)
resp <- colnames(dfz)!="anchovy"
x <- as.matrix(dfz[,resp])
y <- as.matrix(dfz[,"anchovy"])
fit.ridge <- glmnet(x, y, family="gaussian", alpha=0)
```

We need to choose a value for the penalty parameter \\(\\lambda\\) (called `s` in `coef.glmnet()`). If \\(\\lambda\\) is large, then our prior is that the coefficients are very close to 0\. If our \\(\\lambda\\) is small, then our prior is less informative.

We can use cross\-validation to choose \\(\\lambda\\). This chooses a \\(\\lambda\\) that gives us the lowest out\-of\-sample errors. `cv.glmnet()` will do k\-fold cross\-validation and report the MSE. We pick the \\(\\lambda\\) with the lowest MSE (`lambda.min`) or the largest value of \\(\\lambda\\) such that the error is within 1 s.e. of the minimum (`lambda.1se`). This value is computed via cross\-validation so it will vary. We will take the average over a number of runs; here 20 for speed, but 100 is better. Once we have a best \\(\\lambda\\) to use, we can get the coefficients at that value.

```
n <- 20; s <- 0
for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=0)$lambda.min
s.best.ridge <- s/n
coef(fit.ridge, s=s.best.ridge)
```

```
## 12 x 1 sparse Matrix of class "dgCMatrix"
##                         1
## (Intercept) -1.025097e-14
## Year         5.417884e-01
## Trachurus   -1.828772e-01
## air          1.897000e-01
## slp         -8.056669e-02
## sst         -8.958912e-02
## vwnd         8.140451e-02
## wspd3        2.616673e-02
## BOP          9.399500e-02
## FIP          1.156512e-01
## HPP          3.036030e-01
## TOP          2.358358e-01
```

I will plot the standardized coefficients from ordinary least squares against the coefficients from ridge regression (a sketch of this comparison appears at the end of this section). This shows the problem caused by the highly collinear TOP and HPP. They have highly inflated coefficient estimates that are offset by an inflated Year coefficient (in the opposite direction). This is why we need to evaluate collinearity in our variables before fitting a linear regression. With ridge regression, all the estimates have shrunk towards 0 (as they should), but the collinear variables still have very large coefficients.

### 6\.4\.2 Lasso

In ridge regression, the coefficients will be shrunk towards 0 but none will be set to 0 (unless the OLS estimate happens to be 0\). Lasso is a type of regression that uses a penalty function under which 0 is an option.
Lasso does a combination of variable selection and shrinkage. We can do lasso with `glmnet()` by setting `alpha=1`.

```
fit.lasso <- glmnet(x, y, family="gaussian", alpha=1)
```

We select the best \\(\\lambda\\) as we did for ridge regression, using cross\-validation.

```
n <- 20; s <- 0
for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=1)$lambda.min
s.best.lasso <- s/n
coef.lasso <- as.vector(coef(fit.lasso, s=s.best.lasso))[-1]
```

We can compare to the estimates from ridge and OLS and see that the model is now more similar to the models we got from stepwise variable selection. The main difference is that slp and air are included as variables. Lasso has estimated a model that is similar to what we got with stepwise variable selection, without removing the collinear variables from our data set.

### 6\.4\.3 Elastic net

Elastic net uses both L1 and L2 regularization. Elastic net regression generally works well when we have a big dataset. We do not have a big dataset, but we will try elastic net anyway. You can tune the amount of L1 and L2 mixing by adjusting `alpha`, but for this example we will just use `alpha=0.5`.

```
fit.en <- glmnet(x, y, family="gaussian", alpha=0.5)
n <- 20; s <- 0
for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=0.5)$lambda.min
s.best.el <- s/n
coef.en <- as.vector(coef(fit.en, s=s.best.el))[-1]
```

As we might expect, elastic net is part way between the ridge regression model and the Lasso model.
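To make the comparisons above concrete, the four sets of standardized coefficients can be plotted side by side. This is a minimal sketch, not the plotting code used for the original figures; it assumes the objects created above (`fit.ridge`, `s.best.ridge`, `fit.lasso`, `s.best.lasso`, `fit.en`, `s.best.el`) and the standardized data frame `dfz` are still in the workspace.

```
# Sketch (not the original plotting code): compare standardized coefficients
# from OLS, ridge, lasso, and elastic net. Assumes the fits and `dfz` above.
library(ggplot2)
vars <- colnames(dfz)[colnames(dfz) != "anchovy"]
ols.coefs   <- coef(lm(anchovy ~ ., data = dfz))[vars]
ridge.coefs <- as.matrix(coef(fit.ridge, s = s.best.ridge))[vars, 1]
lasso.coefs <- as.matrix(coef(fit.lasso, s = s.best.lasso))[vars, 1]
en.coefs    <- as.matrix(coef(fit.en, s = s.best.el))[vars, 1]
coef.df <- data.frame(
  variable = rep(vars, 4),
  estimate = c(ols.coefs, ridge.coefs, lasso.coefs, en.coefs),
  method   = rep(c("OLS", "ridge", "lasso", "elastic net"), each = length(vars))
)
ggplot(coef.df, aes(x = variable, y = estimate, fill = method)) +
  geom_bar(stat = "identity", position = "dodge") +
  coord_flip() +
  ggtitle("Standardized coefficients by estimation method")
```

Indexing the glmnet coefficient matrices by variable name, rather than by position, keeps the four sets aligned even if the column order of `dfz` changes.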
6\.5 Relative importance metrics -------------------------------- Another approach to linear regression with multiple collinear regressors is to compute relative importance metrics[8](#fn8). The **relaimpo** package will compute the relative importance metrics and provides plotting. This gives a somewhat different picture, with Year, Trachurus and the effort metrics most important, while the environmental variables have low importance. ``` reli <- relaimpo::calc.relimp(anchovy~.,data=df) plot(reli) ``` The pattern remains the same when Year is dropped from the covariates. ``` reli <- relaimpo::calc.relimp(anchovy~.-Year,data=df) plot(reli) ```
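If you want the numbers behind the plot, the sketch below pulls out the importance values and ranks them. It assumes the default `lmg` metric is stored in the returned object's `lmg` slot; check `str(reli)` if your version of **relaimpo** stores it differently.

```
# A sketch: rank the covariates by their relative-importance share.
# The lmg slot name is an assumption based on the default metric.
reli <- relaimpo::calc.relimp(anchovy ~ ., data = df)
sort(reli@lmg, decreasing = TRUE)
```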
6\.6 Orthogonalization ---------------------- The last approach that we will discuss for dealing with collinearity is orthogonalization. With this technique, we replace the set of collinear covariates \\(X\\) with a set of orthogonal, i.e. independent, covariates \\(Z\\), which are linear combinations of the original, collinear, covariates. Principal Components Analysis (PCA) is an example of using orthogonalization to deal with collinear covariates. Another example is when we use the `poly()` function to do polynomial regression. In this case, we are replacing a set of collinear variates, \\(x\\), \\(x^2\\), \\(x^3\\), etc., with a set of orthogonal covariates. The orthogonal set is not unique; there are many different sets of orthogonal covariates you could create. We will show three ways you might create your orthogonal set: PCA, Gram\-Schmidt orthogonalization, and residuals from linear regressions. Note, it is important to use standardized covariates, with the mean removed and variance scaled to 1, when doing orthogonalization. Thus we will use `dfz` instead of `df`. #### 6\.6\.0\.1 Principal component regression Principal component regression is a linear regression in which you transform your collinear covariates using the orthogonal variates created by Principal Components Analysis (PCA). PCA uses an orthogonal set of variates \\(Z\\) in which the first variate accounts for as much of the variability in the dataset as possible, the second accounts for as much of the remaining variance as possible, the third accounts for as much of the variance remaining after the first two, etc. Each variate is orthogonal to the preceding variates. Singular value decomposition is a standard way to compute \\(Z\\) for PCA. \\\[Z \= XV\\] where \\(V\\) is the matrix of right singular vectors from the singular value decomposition of the matrix of covariates (\\(X\=UDV'\\)). \\(V\\) is the ‘loadings’ matrix in a PCA. The orthogonal covariates \\(Z\\) are linear combinations of the original, collinear, covariates. PCA covariates have the nice feature that they are ordered in terms of the amount of variance they explain, but the orthogonal variates, called axes in a PCA, can be a bit hard to interpret. Let’s see an example with our data set. First we will create a matrix of our collinear variates. We need to use the scaled variates. ``` X <- as.matrix(dfz[,colnames(dfz)!="anchovy"]) ``` We create our orthogonal PCA variates using the `svd()` function, which does a singular value decomposition. We will re\-label the variates as ‘principal components (PC)’. A corrplot shows that our variates are now uncorrelated. ``` loadings <- svd(X)$v rownames(loadings) <- colnames(X) Z <- X%*%loadings colnames(Z) <- paste0("PC", 1:ncol(Z)) corrplot(cor(Z)) ``` These new variates are linear combinations of the original variates. The “loadings” indicate the weight of each original variate in the new variate (“principal component”). ``` library(reshape2) meltR = melt(loadings) ggplot(meltR, aes(x=Var1, y = value)) + geom_bar(stat="identity") + coord_flip() + facet_wrap(. ~ Var2) + ggtitle("Loadings") ``` The \\(Z\\) matrix gives us a set of orthogonal variates, but some of them do not explain much of the variance. We know this should be the case because we have collinearity in our data. The singular values (which are the square roots of the eigenvalues of \\(X^\\top X\\)) show how much of the variance in \\(X\\) is explained by each principal component (column in \\(Z\\)).
In the plot, the singular values are those of \\(X/\\sqrt{n}\\), where \\(n\\) is the number of rows of \\(X\\), so that \\(X^\\top X\\) is the correlation matrix. The average singular value for a correlation matrix is 1\. With this scaling, any singular value much less than one is small. ``` sing.val <- svd(X/sqrt(nrow(X)))$d plot(sing.val, xlab="axis", ylab="singular value") abline(h=1, col="red") ``` We could run a linear regression with all 11 orthogonal variates (principal components), but that would not be helpful. The point of orthogonalization is to find a smaller set of variates that explains the structure in the larger set of collinear variates. Based on the singular value plot, we will use the first 2 components. These 2 capture a fair bit of the variability in the anchovy catch. ``` dfpca <- data.frame(anchovy=dfz$anchovy, Z[,1:2]) pcalm <- lm(anchovy ~ ., data=dfpca) summary(pcalm) ``` ``` ## ## Call: ## lm(formula = anchovy ~ ., data = dfpca) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.0413 -0.4149 0.1160 0.4328 0.8402 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 6.209e-15 1.166e-01 0.000 1.00000 ## PC1 3.405e-01 5.090e-02 6.688 1.65e-06 *** ## PC2 -2.278e-01 7.671e-02 -2.970 0.00757 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.5592 on 20 degrees of freedom ## Multiple R-squared: 0.7281, Adjusted R-squared: 0.7009 ## F-statistic: 26.78 on 2 and 20 DF, p-value: 2.209e-06 ``` We can also plot the anchovy catch, broken into small, medium, and large, against the 2 components and see that the 2 components do separate these broad catch levels. ``` library(ggplot2) library(grid) library(gridExtra) df_pca <- prcomp(X) df_out <- as.data.frame(df_pca$x) df_out$group <- cut(dfz$anchovy,3) p<-ggplot(df_out,aes(x=PC1,y=PC2,color=group )) p<-p+geom_point() p ``` We can recover the effect sizes for the original variates from \\(V\\tilde{\\beta}\\). This gives a similar picture to the other methods we used. Year, TOP/HPP/BOP and Trachurus come out as important. ``` coef(pcalm) ``` ``` ## (Intercept) PC1 PC2 ## 6.209379e-15 3.404684e-01 -2.278441e-01 ``` ``` eff <- data.frame(value=svd(X)$v[,1:2] %*% coef(pcalm)[-1], var=colnames(X)) ggplot(eff, aes(x=var, y = value)) + geom_bar(stat="identity") + coord_flip() + ggtitle("Effects on scaled original vars") ``` Principal Components Regression (PCR) is a method for dimension reduction, like Lasso regression or variable selection, but your new principal components can be hard to interpret because they are linear combinations of the original variates. In addition, if you are trying to understand whether a particular variate improves your model, then PCR is not going to help you. Another approach for creating orthogonal variates is Gram\-Schmidt orthogonalization, and this can help you study the effect of adding specific variates. #### 6\.6\.0\.2 Gram\-Schmidt Orthogonalization The Gram\-Schmidt orthogonalization treats the variates in a specific order. The first orthogonal variate will be the first variate, the second orthogonal variate will be the variation in the second variate that is not explained by the first, the third will be the variation in the third variate that is not explained by the first two, etc. This makes your orthogonal variates easier to interpret if they have some natural or desired ordering. For example, let’s say we want to study whether adding TOP to our model helps explain variance over what is already explained by Year.
Putting both TOP and Year in a model won’t be helpful because they are highly correlated and we’ll just get effect sizes that offset each other (one negative, one positive) with high standard errors. Instead, we’ll add Year and then add a second variate that is the variability in TOP that is not explained by Year. Why not add TOP first? The ordering is up to you and depends on the specific question you are asking. In this case, we are asking what TOP adds to a model with Year, not what Year adds to a model with TOP. We create the orthogonal variates as follows. Let \\(Z\\) be the matrix of orthogonal variates and \\(X\\) be the matrix of original, collinear, covariates. The first column of \\(Z\\) is \\(Z\_1 \= X\_1\\). The second column of \\(Z\\) is \\\[Z\_2 \= X\_2 \- Z\_1(Z\_1^\\top Z\_1\)^{\-1}Z\_1^\\top X\_2\.\\] The third column of \\(Z\\) is \\\[Z\_3 \= X\_3 \- Z\_1(Z\_1^\\top Z\_1\)^{\-1}Z\_1^\\top X\_3 \- Z\_2(Z\_2^\\top Z\_2\)^{\-1}Z\_2^\\top X\_3\.\\] Here is R code to create the first 3 columns of the Z matrix. The cross\-product of Z is diagonal, indicating that our new variates are orthogonal. ``` pr <- function(y, x){ x%*%solve(t(x)%*%x)%*%t(x)%*%y } Z <- cbind(X[,1], 0, 0) Z[,2] <- X[,2] - pr(X[,2], Z[,1]) Z[,3] <- X[,3] - pr(X[,3], Z[,1]) - pr(X[,3], Z[,2]) zapsmall(crossprod(Z)) ``` ``` ## [,1] [,2] [,3] ## [1,] 23 0.000000 0.0000 ## [2,] 0 6.757089 0.0000 ## [3,] 0 0.000000 22.1069 ``` To create our orthogonal variates, we have to give some thought to the ordering. Also, not all the variates in our example are collinear, so we don’t need to do Gram\-Schmidt orthogonalization on all the variates. From the variance inflation factors, Year, HPP, TOP, BOP and *Trachurus* have the worst collinearity problems. ``` full <- lm(anchovy ~ ., data=df) car::vif(full) ``` ``` ## Year Trachurus air slp sst vwnd wspd3 ## 103.922970 18.140279 3.733963 3.324463 2.476689 2.010485 1.909992 ## BOP FIP HPP TOP ## 13.676208 8.836446 63.507170 125.295727 ``` We’ll do Gram\-Schmidt orthogonalization on these 5\. First let’s resort our variates to put these 5 first.
``` pr <- function(y, x){ x%*%solve(t(x)%*%x)%*%t(x)%*%y } Z <- X[,c("Year","BOP","HPP","TOP","Trachurus","FIP","air","slp","sst","vwnd","wspd3")] Z[,2] <- X[,2] - pr(X[,2], Z[,1]) Z[,3] <- X[,3] - pr(X[,3], Z[,1]) - pr(X[,3], Z[,2]) zapsmall(crossprod(Z)) ``` ``` ## Year BOP HPP TOP Trachurus FIP ## Year 23.000000 0.000000 0.000000 21.489875 19.328398 -13.995187 ## BOP 0.000000 6.757089 0.000000 2.743112 6.757089 -0.836240 ## HPP 0.000000 0.000000 22.106900 2.906483 0.000000 7.049420 ## TOP 21.489875 2.743112 2.906483 23.000000 20.802454 -9.247200 ## Trachurus 19.328398 6.757089 0.000000 20.802454 23.000000 -12.597306 ## FIP -13.995187 -0.836240 7.049420 -9.247200 -12.597306 23.000000 ## air -3.834139 -1.309927 22.106900 -1.207694 -4.532004 9.544555 ## slp 12.722819 3.030045 -6.167315 11.179809 13.721858 -12.356850 ## sst -5.105424 -0.381354 14.974748 -1.749126 -4.671774 8.493349 ## vwnd 2.278039 2.424582 -11.802674 3.176227 4.338966 -1.073977 ## wspd3 2.219684 -0.922046 -6.961443 -0.226925 0.943299 -5.505128 ## air slp sst vwnd wspd3 ## Year -3.834139 12.722819 -5.105424 2.278039 2.219684 ## BOP -1.309927 3.030045 -0.381354 2.424582 -0.922046 ## HPP 22.106900 -6.167315 14.974748 -11.802674 -6.961443 ## TOP -1.207694 11.179809 -1.749126 3.176227 -0.226925 ## Trachurus -4.532004 13.721858 -4.671774 4.338966 0.943299 ## FIP 9.544555 -12.356850 8.493349 -1.073977 -5.505128 ## air 23.000000 -8.875634 15.899760 -12.652455 -7.152721 ## slp -8.875634 23.000000 -9.371464 7.341141 -1.298503 ## sst 15.899760 -9.371464 23.000000 -7.755410 -3.926768 ## vwnd -12.652455 7.341141 -7.755410 23.000000 4.991220 ## wspd3 -7.152721 -1.298503 -3.926768 4.991220 23.000000 ```
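To connect this back to the motivating question (does TOP add anything beyond Year?), here is a minimal sketch that regresses the anchovy catch on Year and on the part of TOP that is orthogonal to Year. It reuses `X`, `pr()` and `dfz` from above; the choice of TOP as the second variate is just the example discussed in the text.

```
# A sketch: Gram-Schmidt style, using named columns so the ordering is explicit.
z1 <- X[, "Year"]
z2 <- X[, "TOP"] - pr(X[, "TOP"], z1)   # variation in TOP not explained by Year
summary(lm(dfz$anchovy ~ z1 + z2))      # the z2 coefficient asks what TOP adds
```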
6\.7 Prediction accuracy ------------------------ We could use cross\-validation to compare prediction accuracy if we had a pre\-defined set of models to compare. In our case, we do not have a set of models but rather a set of “number of variables”, and the specific variables to include for a given number are determined using the fit to the data (in some fashion). We cannot use variable selection (of any sort) with our full dataset to choose the variables and then turn around and use cross\-validation with the same dataset to test the out\-of\-sample prediction accuracy. Anytime you double\-use your data like that, you will have severe bias problems. Instead, we will test our models using sets of years that we held out for testing, i.e. that were not used for fitting the model or selecting variates. We will use the following test years: 1988 and 1989, as used in Stergiou and Christou, and 1988\-1992 (five years). We will use the performance testing procedure in Chapter [5](5-perf-testing.html#perf-testing). #### Computing the prediction error for a model First we set up the test data frames. We can then compute the RMSE for the predictions from one of our linear regression models. Let’s use the model selected by `step()` using AIC as the metric and stepwise variable selection starting from the full model, `step.full`. ``` fr <- predict(step.full, newdata=testdata2) err <- fr - testdata2$anchovy sqrt(mean(err^2)) ``` ``` ## [1] 0.05289656 ``` We could also use `forecast()` in the forecast package to compute predictions and then use `accuracy()` to compute the prediction metrics for the test data. ``` fr <- forecast::forecast(step.full, newdata=testdata2) forecast::accuracy(fr, testdata2) ``` ``` ## ME RMSE MAE MPE MAPE MASE ## Training set 0.00000000 0.11151614 0.08960780 -0.01540787 0.9925324 0.2480860 ## Test set -0.03755081 0.05289656 0.03755081 -0.39104145 0.3910415 0.1039623 ``` #### Comparing the predictions for a suite of models Let’s compare the predictions from a suite of models for the full out\-of\-sample data that we have: 1988 to 2007\. ``` fr.list <- list() testdat <- testdata.full <- df.full[24:nrow(df.full),] n.fr <- length(testdat) ``` Then we fit the three best lm models chosen via stepwise regression, exhaustive search or cross\-validation: ``` modelname <- "Year+FIP" fit <- lm(anchovy~Year+FIP, data=df) fr.list[[modelname]] <- predict(fit, newdata=testdat) modelname <- "Year+Trachurus+FIP" fit <- lm(anchovy~Year+Trachurus+FIP, data=df) fr.list[[modelname]] <- predict(fit, newdata=testdat) modelname <- "6 variables" fit <- lm(anchovy~Year+air+vwnd+BOP+FIP+TOP, data=df) fr.list[[modelname]] <- predict(fit, newdata=testdat) ``` Then we add the forecasts for ridge regression, ``` library(glmnet) resp <- colnames(df)!="anchovy" x <- as.matrix(df[,resp]) y <- as.matrix(df[,"anchovy"]) fit <- glmnet(x, y, family="gaussian", alpha=0) n <- 20; s <- 0 for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=0)$lambda.min s.best <- s/n modelname <- "Ridge Regression" newx <- as.matrix(testdat[,resp]) fr.list[[modelname]] <- predict(fit, newx=newx, s=s.best) ``` LASSO regression, ``` fit <- glmnet(x, y, family="gaussian", alpha=1) n <- 20; s <- 0 for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=1)$lambda.min s.best <- s/n modelname <- "LASSO Regression" newx <- as.matrix(testdat[,resp]) fr.list[[modelname]] <- predict(fit, newx=newx, s=s.best) ``` and elastic net regression.
``` fit <- glmnet(x, y, family="gaussian", alpha=0.5) n <- 20; s <- 0 for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=0.5)$lambda.min s.best <- s/n modelname <- "Elastic net Regression" newx <- as.matrix(testdat[,resp]) fr.list[[modelname]] <- predict(fit, newx=newx, s=s.best) ``` Now we can create a table of each model’s RMSE at increasing forecast horizons. ``` restab <- as.data.frame(matrix(NA,1,21)) #restab <- data.frame(model="", stringsAsFactors=FALSE) for(i in 1:length(fr.list)){ err <- fr.list[[i]]-testdat$anchovy restab[i,2:(length(err)+1)] <- sqrt(cumsum(err^2)/1:length(err)) restab[i,1] <- names(fr.list)[i] } tmp <- restab[,c(1,6,11,16,21)] colnames(tmp) <- c("model","5 yrs", "10 yrs", "15 yrs", "20 yrs") knitr::kable(tmp) ``` | model | 5 yrs | 10 yrs | 15 yrs | 20 yrs | | --- | --- | --- | --- | --- | | Year\+FIP | 0\.6905211 | 0\.8252467 | 0\.9733136 | 1\.0597621 | | Year\+Trachurus\+FIP | 0\.8324962 | 0\.9598570 | 1\.2391294 | 1\.4442466 | | 6 variables | 0\.3612936 | 0\.6716181 | 0\.9543952 | 1\.1356324 | | Ridge Regression | 0\.7822712 | 0\.8393271 | 0\.9564713 | 0\.9673194 | | LASSO Regression | 0\.5959503 | 0\.7471635 | 1\.0132615 | 1\.1769412 | | Elastic net Regression | 0\.7092379 | 0\.8589573 | 1\.1609910 | 1\.3481830 | If we plot the forecasts with the 1965\-1987 data (open circles) and the 1988\-2007 data (solid circles), we see that the forecasts continue the upward trend in the data while the data level off. This illustrates a problem with using “Year” as a covariate. This covariate is deterministically increasing. If it is included in the model, then the forecasts will have an upward or downward trend. When using environmental, biological and effort covariates, one hopes that the covariates explain the trends in the data. It would be wiser to not use “Year” as a covariate. We fit LASSO regression with no Year, ``` resp <- colnames(df)!="anchovy" & colnames(df)!="Year" x <- as.matrix(df[,resp]) y <- as.matrix(df[,"anchovy"]) fit.lasso <- glmnet(x, y, family="gaussian", alpha=1) n <- 20; s <- 0 for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=1)$lambda.min s.best.lasso <- s/n modelname <- "LASSO Reg no Year" newx <- as.matrix(testdat[,resp]) fr.list[[modelname]] <- predict(fit.lasso, newx=newx, s=s.best.lasso) ``` and ridge regression with no Year, ``` resp <- colnames(df)!="anchovy" & colnames(df)!="Year" x <- as.matrix(df[,resp]) y <- as.matrix(df[,"anchovy"]) fit.ridge <- glmnet(x, y, family="gaussian", alpha=0) n <- 20; s <- 0 for(i in 1:n) s <- s + cv.glmnet(x, y, nfolds=5, alpha=0)$lambda.min s.best.ridge <- s/n modelname <- "Ridge Reg no Year" newx <- as.matrix(testdat[,resp]) fr.list[[modelname]] <- predict(fit.ridge, newx=newx, s=s.best.ridge) ``` Now we can add these two models to the table. | | model | 5 yrs | 10 yrs | 15 yrs | 20 yrs | | --- | --- | --- | --- | --- | --- | | 7 | LASSO Reg no Year | 0\.7277462 | 0\.7200965 | 0\.7306603 | 0\.6668768 | | 8 | Ridge Reg no Year | 0\.9741012 | 0\.9555257 | 0\.9792060 | 0\.9121324 | Without “Year”, the model predicts 1988 well (using 1987 covariates) but then has a large jump upward, after which it has a similar “flat\-ish” trend as seen after 1989\. What happened in 1988 (the covariate year affecting 1989\)? The horsepower covariate, along with BOP (total boats) and TOP (boat tonnage), has a sudden upward jump in 1988\. This is seen in all the fisheries. This suggests that in 1988 either a large number of new boats entered all the fisheries or what boats were counted as “purse seiners” was changed.
Upon looking at the covariates, it seems that something changed in the recording from 1988 to 1996\. If we correct that jump from 1988 to 1989 (subtract the jump from all data 1989 onward), the Lasso and Ridge predictions without year look considerably better.
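The text does not show the correction step, so here is a minimal sketch of one way to do it. It assumes `df.full` is ordered by year with row 24 corresponding to 1988 (as in the test-set split above), and that the affected effort columns are BOP, HPP and TOP; both are assumptions based on the discussion rather than code from the book.

```
# A sketch: remove the 1988-to-1989 jump in the effort covariates by
# subtracting it from 1989 onward (row 25 on, under the assumed ordering).
adj <- df.full
for (v in c("BOP", "HPP", "TOP")) {
  jump <- adj[25, v] - adj[24, v]                      # change from 1988 to 1989
  adj[25:nrow(adj), v] <- adj[25:nrow(adj), v] - jump  # shift the later years down
}
```

The corrected covariates in `adj` could then be used to rebuild the test matrices and recompute the tables above.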
6\.8 Discussion --------------- This chapter illustrates a variety of approaches for “variable selection”. This is the situation where one has a large number of covariates and one wants to choose the covariates that produce the best predictions. Following Stergiou and Christou, I used mainly linear regressions with variables selected with stepwise variable selection. Keep in mind that stepwise variable selection is generally considered data\-dredging, and a reviewer who is a statistician will almost certainly find fault with this approach. Penalized regression is a more accepted approach for developing a regression model with many covariates. Part of the appeal of penalized regression is that it is robust to collinearity in your covariates. Stepwise variable selection is not robust to collinearity. Cross\-validation is an approach for testing a *process* of building a model. In the case of the anchovy data, a model with only two covariates, Year and number of fishers, was selected via cross\-validation as having the best (lowest) predictive error. This is considerably smaller than the best model found via stepwise variable selection. When we tested the models against data completely held out of the analysis and model development (1988\-2007\), we discovered a number of problems: 1\) using “Year” as a covariate is a bad idea since it increases deterministically, and 2\) there is a problem with the effort data between 1988 and 1996; there is a jump in the effort data. We used variable selection or penalized regression to select or weight a large set of covariates. Another approach is to develop a set of covariates from your knowledge of the system and use only covariates that are thought to be important. In Section 4\.7\.7 of (Harrell 2015\), a rule of thumb (based on shrinkage) for the number of predictors that can be used without overfitting is given by \\((LR\-p)/9\\), where \\(LR\\) is the likelihood ratio test \\(\\chi^2\\) of the full model against the null model with only an intercept and \\(p\\) is the number of variables in the full model. ``` null <- lm(anchovy ~ 1, data=df) full <- lm(anchovy ~ ., data=df) a <- lmtest::lrtest(null, full) (a$Chisq[2]-a$Df[2])/9 ``` ``` ## [1] 6.239126 ``` This rule of thumb suggests that we could include six variables. Another approach to model building would be to select environmental and biological variables based on the known biology of anchovy and to select one effort variable or a composite “effort” based on a combination of the effort variables.
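The last suggestion, a composite “effort” covariate, is easy to sketch. Below is one way to build it as the first principal component of the scaled effort variables; the particular companion covariates (Trachurus and sst) are arbitrary placeholders for illustration, not a recommendation from the text.

```
# A sketch: collapse the collinear effort variables into one composite index
# and use it in a small regression. Assumes the scaled data frame dfz from above.
effort.vars <- c("BOP", "FIP", "HPP", "TOP")
effort.pc1 <- prcomp(dfz[, effort.vars])$x[, 1]   # first PC of the effort block
fit.comp <- lm(dfz$anchovy ~ effort.pc1 + dfz$Trachurus + dfz$sst)
summary(fit.comp)
```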
Chapter 7 AR models with covariates ===================================
7\.1 MREG with ARMA errors -------------------------- ``` library(ggplot2) library(forecast) library(astsa) library(nlme) ``` The `stats::arima()` and `forecast::auto.arima()` functions with argument `xreg` fit a multivariate linear regression with ARMA errors. Note, this is not what is termed an ARMAX model. ARMAX models will be addressed in Section [7\.2](7-2-ARMAX.html#ARMAX). The model fitted when `xreg` is passed in is: \\\[\\begin{equation} \\begin{gathered} x\_t \= \\alpha \+ \\phi\_1 c\_{t,1} \+ \\phi\_2 c\_{t,2} \+ \\dots \+ z\_t \\\\ z\_t \= \\beta\_1 z\_{t\-1} \+ \\dots \+ \\beta\_p z\_{t\-p} \+ e\_t \+ \\theta\_1 e\_{t\-1} \+ \\dots \+ \\theta\_q e\_{t\-q}\\\\ e\_t \\sim N(0,\\sigma) \\end{gathered} \\end{equation}\\] where `xreg` is a matrix with \\(c\_{t,1}\\) in column 1, \\(c\_{t,2}\\) in column 2, etc., and \\(z\_t\\) are the ARMA errors. ### 7\.1\.1 Example: fitting with auto.arima Let’s fit two of the best multivariate regression models from Section [6\.3\.1](6-3-MREGVAR.html#stepwise-sel) with ARMA errors. We can use `auto.arima` to search for an ARMA model for the residuals. ``` xreg <- as.matrix(df[,c("Year","FIP")]) forecast::auto.arima(df$anchovy, xreg=xreg) ``` ``` ## Series: df$anchovy ## Regression with ARIMA(0,0,0) errors ## ## Coefficients: ## Year FIP ## 0.0730 1.0394 ## s.e. 0.0046 0.0079 ## ## sigma^2 estimated as 0.024: log likelihood=11.3 ## AIC=-16.6 AICc=-15.34 BIC=-13.2 ``` The estimated model is a “Regression with ARIMA(0,0,0\) errors”, which indicates no autoregressive or moving average pattern in the residuals. We can also see this by looking at an ACF plot of the residuals. ``` lm(anchovy~Year+FIP,data=df) %>% resid %>% acf ``` The same pattern is seen with the models with more variables. ``` xreg <- as.matrix(df[,c("Year","Trachurus","FIP")]) forecast::auto.arima(df$anchovy, xreg=xreg) ``` ``` ## Series: df$anchovy ## Regression with ARIMA(0,0,0) errors ## ## Coefficients: ## Year Trachurus FIP ## 0.0883 -0.2339 1.2686 ## s.e. 0.0083 0.1092 0.1073 ## ## sigma^2 estimated as 0.02101: log likelihood=13.39 ## AIC=-18.78 AICc=-16.56 BIC=-14.24 ``` ### 7\.1\.2 Example: fitting with arima and sarima If we want to fit a specific ARMA model, for example an AR(1\) model for the residuals, we can use `arima`. ``` xreg <- as.matrix(df[,c("Year","FIP")]) arima(df$anchovy, xreg=xreg, order = c(1,0,0)) ``` ``` ## ## Call: ## arima(x = df$anchovy, order = c(1, 0, 0), xreg = xreg) ## ## Coefficients: ## ar1 intercept Year FIP ## -0.0404 -0.0975 0.0729 1.0517 ## s.e. 0.2256 2.3540 0.0057 0.2920 ## ## sigma^2 estimated as 0.02188: log likelihood = 11.32, aic = -12.64 ``` We can also use the `sarima` function in the **astsa** package. This plots a nice diagnostics plot with the fit.
``` xreg <- as.matrix(df[,c("Year","FIP")]) astsa::sarima(df$anchovy, 1, 0, 0, xreg=xreg) ``` ``` ## initial value -1.932551 ## iter 2 value -1.945583 ## iter 3 value -1.946840 ## iter 4 value -1.946961 ## iter 5 value -1.946974 ## iter 6 value -1.946976 ## iter 7 value -1.946976 ## iter 7 value -1.946976 ## iter 7 value -1.946976 ## final value -1.946976 ## converged ## initial value -1.897686 ## iter 2 value -1.910866 ## iter 3 value -1.910989 ## iter 4 value -1.911001 ## iter 5 value -1.911004 ## iter 5 value -1.911004 ## iter 5 value -1.911004 ## final value -1.911004 ## converged ``` ``` ## $fit ## ## Call: ## stats::arima(x = xdata, order = c(p, d, q), seasonal = list(order = c(P, D, ## Q), period = S), xreg = xreg, transform.pars = trans, fixed = fixed, optim.control = list(trace = trc, ## REPORT = 1, reltol = tol)) ## ## Coefficients: ## ar1 intercept Year FIP ## -0.0404 -0.0975 0.0729 1.0517 ## s.e. 0.2256 2.3540 0.0057 0.2920 ## ## sigma^2 estimated as 0.02188: log likelihood = 11.32, aic = -12.64 ## ## $degrees_of_freedom ## [1] 19 ## ## $ttable ## Estimate SE t.value p.value ## ar1 -0.0404 0.2256 -0.1791 0.8597 ## intercept -0.0975 2.3540 -0.0414 0.9674 ## Year 0.0729 0.0057 12.8250 0.0000 ## FIP 1.0517 0.2920 3.6024 0.0019 ## ## $AIC ## [1] -0.549349 ## ## $AICc ## [1] -0.4527307 ## ## $BIC ## [1] -0.3025025 ``` ### 7\.1\.3 Example: fitting with gls We can also fit multivariate regression with autocorrelated errors with the nlme package and function `gls()`. The default fitting method is REML, and to get the same results as `arima()`, we need to specify `method="ML"`. ``` mod <- gls(anchovy~Year+FIP, data=df, correlation=corAR1(form=~1), method="ML") summary(mod) ``` ``` ## Generalized least squares fit by maximum likelihood ## Model: anchovy ~ Year + FIP ## Data: df ## AIC BIC logLik ## -12.63503 -6.957558 11.31751 ## ## Correlation Structure: AR(1) ## Formula: ~1 ## Parameter estimate(s): ## Phi ## -0.04023925 ## ## Coefficients: ## Value Std.Error t-value p-value ## (Intercept) -0.0970390 2.4776517 -0.039166 0.9691 ## Year 0.0729497 0.0060939 11.971012 0.0000 ## FIP 1.0516813 0.3070037 3.425630 0.0027 ## ## Correlation: ## (Intr) Year ## Year -0.631 ## FIP -1.000 0.612 ## ## Standardized residuals: ## Min Q1 Med Q3 Max ## -1.5299235 -0.9188421 0.2607087 0.6691076 2.1482577 ## ## Residual standard error: 0.1480464 ## Degrees of freedom: 23 total; 20 residual ``` You can also fit an AR(2\) or ARMA with `gls()`: ``` mod <- gls(anchovy~Year+FIP, data=df, correlation=corARMA(form = ~1,p=2,q=0), method="ML") summary(mod) ``` ``` ## Generalized least squares fit by maximum likelihood ## Model: anchovy ~ Year + FIP ## Data: df ## AIC BIC logLik ## -12.033 -5.220033 12.0165 ## ## Correlation Structure: ARMA(2,0) ## Formula: ~1 ## Parameter estimate(s): ## Phi1 Phi2 ## -0.09861143 -0.28248099 ## ## Coefficients: ## Value Std.Error t-value p-value ## (Intercept) -1.1700795 2.0440075 -0.572444 0.5734 ## Year 0.0732924 0.0048706 15.047960 0.0000 ## FIP 1.1869743 0.2533707 4.684734 0.0001 ## ## Correlation: ## (Intr) Year ## Year -0.662 ## FIP -1.000 0.645 ## ## Standardized residuals: ## Min Q1 Med Q3 Max ## -1.6245159 -0.9492037 0.1640458 0.6481195 2.1941017 ## ## Residual standard error: 0.1494797 ## Degrees of freedom: 23 total; 20 residual ``` ### 7\.1\.4 MREG of first or second differences In the multivariate regression with ARMA errors, the response variable \\(x\_t\\) is not necessarily stationary since the covariates \\(c\_t\\)’s need not be stationary. 
If we wish to model the first or second differences of \\(x\_t\\), then we are potentially modeling a stationary process, if differencing leads to a stationary process. We need to think carefully about how we set up a multivariate regression if our response variable is stationary. One recommendation is that if \\(x\_t\\) is differenced, the same differencing should be applied to the covariates. The idea is that if the response variable is stationary, we want to make sure that the independent variables are also stationary. However, in a fisheries application \\(x\_t \- x\_{t\-1}\\) often has a biological meaning, the yearly (or monthly or hourly) rate of change, and that rate of change is what one is trying to explain with a covariate. One would not necessarily expect the first difference to be stationary, and one is trying to explain any trend in the one\-step rate of change with some set of covariates. On the other hand, if the response variable, whether the raw data or the first or second difference, is stationary, then trying to explain its variability via a non\-stationary covariate will clearly lead to the effect size of the covariates being zero. We don’t need to fit a model to tell us that. ### 7\.1\.5 Discussion R provides many different functions and packages for fitting a multivariate regression with autoregressive errors. In the case of the anchovy time series, the errors are not autoregressive. In general, the first step to determining whether a model with correlated errors is required is to look at diagnostics for the residuals. Select a model (see previous section) and then examine the residuals for evidence of autocorrelation. However, another approach is to include a model with autocorrelated errors in your model set and compare via model selection. If this latter approach is taken, you must be careful that the model selection criteria (AIC, BIC, etc.) are comparable. If you use functions from different packages, the authors have often left off a constant in their model selection criterion formulas. If you need to use different packages, you should carefully compare the model selection criteria for the same model fit with the different functions and adjust for any missing constants.
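As a quick check of that last point, the sketch below fits the same regression with AR(1) errors with `arima()` and with `gls()` and compares the criteria directly; the outputs shown earlier (AIC of about \-12\.6 from both) suggest these two happen to be on the same scale here.

```
# A sketch: fit the same AR(1)-error regression two ways and compare AIC.
library(nlme)
xreg <- as.matrix(df[, c("Year", "FIP")])
fit.arima <- arima(df$anchovy, xreg = xreg, order = c(1, 0, 0))
fit.gls <- gls(anchovy ~ Year + FIP, data = df,
               correlation = corAR1(form = ~1), method = "ML")
AIC(fit.arima)  # about -12.64 in the output above
AIC(fit.gls)    # about -12.64 as well, so the criteria agree here
```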
7\.2 ARMAX Models ----------------- ``` library(MARSS) ``` The `stats::arima()` and `forecast::auto.arima()` functions with argument `xreg` fit a multivariate linear regression with ARMA errors, as in the previous section. That is not what is termed an ARMAX model. In an ARMAX model, the covariates enter the equation for \\(x\_t\\) directly, alongside the autoregressive and moving average terms, rather than appearing only in a regression whose errors are ARMA. For reference, the model fitted when `xreg` is passed in is: \\\[\\begin{equation} \\begin{gathered} x\_t \= \\alpha \+ \\phi\_1 c\_{t,1} \+ \\phi\_2 c\_{t,2} \+ \\dots \+ z\_t \\\\ z\_t \= \\beta\_1 z\_{t\-1} \+ \\dots \+ \\beta\_p z\_{t\-p} \+ e\_t \+ \\theta\_1 e\_{t\-1} \+ \\dots \+ \\theta\_q e\_{t\-q}\\\\ e\_t \\sim N(0,\\sigma) \\end{gathered} \\end{equation}\\] where `xreg` is a matrix with \\(c\_{t,1}\\) in column 1, \\(c\_{t,2}\\) in column 2, etc., and \\(z\_t\\) are the ARMA errors. ### 7\.2\.1 Discussion R provides many different functions and packages for fitting a multivariate regression with autoregressive errors. In the case of the anchovy time series, the errors are not autoregressive. In general, the first step to determining whether a model with correlated errors is required is to look at diagnostics for the residuals. Select a model (see previous section) and then examine the residuals for evidence of autocorrelation. However, another approach is to include a model with autocorrelated errors in your model set and compare via model selection. If this latter approach is taken, you must be careful that the model selection criteria (AIC, BIC, etc.) are comparable. If you use functions from different packages, the authors have often left off a constant in their model selection criterion formulas. If you need to use different packages, you should carefully compare the model selection criteria for the same model fit with the different functions and adjust for any missing constants.
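The chapter loads **MARSS** here, so below is a minimal sketch of how a univariate ARMAX(1)-style model could be written in MARSS state-space form: an AR(1) state with a covariate effect and no observation error. The choice of FIP as the covariate, the zero observation variance, and the default initial conditions are my assumptions for illustration, not the book's model.

```
# A sketch: AR(1) process for anchovy with a covariate entering the state
# equation directly (an ARMAX(1)-type structure), fit with MARSS.
library(MARSS)
dat <- matrix(df$anchovy, nrow = 1)   # 1 x T response
covar <- matrix(df$FIP, nrow = 1)     # 1 x T covariate, same years
mod.list <- list(
  B = matrix("b"), U = matrix("u"), Q = matrix("q"),  # AR(1) state with drift
  Z = matrix(1), A = matrix(0), R = matrix(0),        # observed without error
  C = matrix("C"), c = covar                          # covariate effect on the state
)
fit.armax <- MARSS(dat, model = mod.list)
```

In practice you may need to adjust the initial state settings or the `control` list for convergence; the point is only to show where the covariate enters, in contrast to the regression-with-ARMA-errors form above.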
Chapter 8 Seasonality ===================== To work with seasonal data, we need to turn our data into a ts object, which is a “time\-series” object in R. This will allow us to specify the seasonality. It is important that we do not leave out any time steps in our time series. Your data should look like this: ``` Year Month metric.tons 2018 1 1 2018 2 2 2018 3 3 ... 2019 1 4 2019 2 6 2019 3 NA ``` The months are in order and the years are in order, and a month with no observation gets an NA rather than being dropped.
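If your raw catch table skips months with no landings, one way to fill the gaps before calling `ts()` is sketched below. The data frame `catch` and its column names are hypothetical placeholders that follow the layout shown above, and the sketch assumes the series starts in January.

```
# A sketch: build a complete Year-Month grid, merge the catch data onto it so
# missing months become NA, then create the ts object.
all.months <- expand.grid(Month = 1:12, Year = min(catch$Year):max(catch$Year))
catch.full <- merge(all.months, catch, by = c("Year", "Month"), all.x = TRUE)
catch.full <- catch.full[order(catch.full$Year, catch.full$Month), ]
catchts <- ts(catch.full$metric.tons,
              start = c(min(catch.full$Year), 1), frequency = 12)
```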
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/8-1-chinook-data.html
8\.1 Chinook data ----------------- We will illustrate the analysis of seasonal catch data using a data set of monthly chinook salmon from Washington state. ### Load the chinook salmon data set This is in the **FishForecast** package. ``` require(FishForecast) head(chinook.month) ``` | Year | Month | Species | State | log.metric.tons | metric.tons | Value | | --- | --- | --- | --- | --- | --- | --- | | 1990 | Jan | Chinook | WA | 3\.4 | 29\.9 | 1\.09e\+05 | | 1990 | Feb | Chinook | WA | 3\.81 | 45\.1 | 3\.09e\+05 | | 1990 | Mar | Chinook | WA | 3\.51 | 33\.5 | 2\.01e\+05 | | 1990 | Apr | Chinook | WA | 4\.25 | 70 | 4\.31e\+05 | | 1990 | May | Chinook | WA | 5\.2 | 181 | 8\.6e\+05 | | 1990 | Jun | Chinook | WA | 4\.37 | 79\.2 | 4\.2e\+05 | The data are monthly and start in January 1990\. To make this into a ts object do ``` chinookts <- ts(chinook.month$log.metric.tons, start=c(1990,1), frequency=12) ``` `start` is the year and month and `frequency` is the number of months in the year. If we had quarterly data that started in the 2nd quarter of 1990, our call would be ``` ts(chinook$log.metric.tons, start=c(1990,2), frequency=4) ``` If we had hourly data starting on hour 5 of day 10 and each row was an hour, our call would be ``` ts(chinook$log.metric.tons, start=c(10,5), frequency=24) ``` Use `?ts` to see more examples of how to set up ts objects. ### Plot seasonal data Now that we have specified our seasonal data as a ts object, it is easy to plot because R knows what the season is. ``` plot(chinookts) ```
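A quick way to confirm that the ts object was set up as intended is to query its time attributes; a short check, using only base R functions:

```
start(chinookts)      # should be 1990 1
frequency(chinookts)  # should be 12
# Extract the first year of observations by date rather than by position
window(chinookts, c(1990, 1), c(1990, 12))
```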
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/8-2-seasonal-exponential-smoothing-model.html
8\.2 Seasonal Exponential Smoothing Model ----------------------------------------- Now we add a few more lines to our ETS table of models: | model | “ZZZ” | alternate function | | --- | --- | --- | | exponential smoothing no trend | “ANN” | `ses()` | | exponential smoothing with trend | “AAN” | `holt()` | | exponential smoothing with season no trend | “ANA” | NA | | exponential smoothing with season and trend | “AAA” | NA | | estimate best trend and season model | “ZZZ” | NA | Unfortunately `ets()` will not handle missing values and will find the longest continuous piece of our data and use that. ``` library(forecast) traindat <- window(chinookts, c(1990,1), c(1999,12)) fit <- forecast::ets(traindat, model="AAA") ``` ``` ## Warning in forecast::ets(traindat, model = "AAA"): Missing values encountered. ## Using longest contiguous portion of time series ``` ``` fr <- forecast::forecast(fit, h=24) plot(fr) points(window(chinookts, c(1996,1), c(1996,12))) ``` ### 8\.2\.1 Force seasonality to evolve more If we plot the decomposition, we see that the seasonal component is not changing over time, unlike the actual data. The bar on the right alerts us that the scale on the 3rd panel is much smaller. ``` autoplot(fit) ``` Pass in a high `gamma` (the season weighting) to force the seasonality to evolve. ``` fit <- forecast::ets(traindat, model="AAA", gamma=0.4) ``` ``` ## Warning in forecast::ets(traindat, model = "AAA", gamma = 0.4): Missing values ## encountered. Using longest contiguous portion of time series ``` ``` autoplot(fit) ``` ---
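To see what forcing a higher `gamma` costs in terms of fit, one might keep the two fits under separate names (the names `fit_default` and `fit_gamma` below are mine, not from the text), compare their AICc, and forecast from the modified model:

```
fit_default <- forecast::ets(traindat, model = "AAA")
fit_gamma   <- forecast::ets(traindat, model = "AAA", gamma = 0.4)

# A much larger AICc for the forced-gamma fit suggests the extra
# seasonal flexibility is not supported by the training data
c(default = fit_default$aicc, gamma_0.4 = fit_gamma$aicc)

# Forecast from the modified fit to see the effect on the forecasts
plot(forecast::forecast(fit_gamma, h = 24))
```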
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/8-3-seasonal-arima-model.html
8\.3 Seasonal ARIMA model ------------------------- `auto.arima()` will recognize that our data are seasonal and will fit a seasonal ARIMA model to our data by default. Let’s use the data that `ets()` used; this is shorter than our training data and runs from Oct 1990 to Dec 1995\. The data used by `ets()` is returned in `fit$x`. We will redefine the training data to be the longest segment with no missing values. ``` traindat <- window(chinookts, c(1990,10), c(1995,12)) testdat <- window(chinookts, c(1996,1), c(1996,12)) fit <- forecast::auto.arima(traindat) fr <- forecast::forecast(fit, h=12) plot(fr) points(testdat) ```
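Printing the fitted object shows which seasonal structure `auto.arima()` chose; a quick check:

```
# The printed order has the form ARIMA(p,d,q)(P,D,Q)[12]; the [12]
# indicates a monthly seasonal component was fit
fit
summary(fit)
```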
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/8-5-forecast-evaluation.html
8\.5 Forecast evaluation ------------------------ We can compute the forecast performance metrics as usual. ``` fit.ets <- forecast::ets(traindat, model="AAA") fr <- forecast::forecast(fit.ets, h=12) ``` Look at the forecast so you know what years and months to include in your test data. Pull those 12 months out of your data using the `window()` function. ``` testdat <- window(chinookts, c(1996,1), c(1996,12)) ``` Use `accuracy()` to get the forecast error metrics. ``` forecast::accuracy(fr, testdat) ``` ``` ## ME RMSE MAE MPE MAPE MASE ## Training set -0.0001825075 0.5642326 0.4440532 -9.254074 25.40106 0.7364593 ## Test set 0.3143200919 0.7518660 0.6077172 65.753096 81.38568 1.0078949 ## ACF1 Theil's U ## Training set 0.07490341 NA ## Test set 0.05504107 0.4178409 ``` We can do the same for the ARIMA model. ``` fit <- forecast::auto.arima(traindat) fr <- forecast::forecast(fit, h=12) forecast::accuracy(fr, testdat) ``` ``` ## ME RMSE MAE MPE MAPE MASE ## Training set 0.01076412 0.5643352 0.3966735 -1.219729 26.91589 0.6578803 ## Test set 0.79665978 0.9180939 0.7966598 19.587692 53.48599 1.3212549 ## ACF1 Theil's U ## Training set -0.05991122 NA ## Test set -0.12306276 0.5993699 ```
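To put the two models side by side, one can keep just the test-set row of each `accuracy()` table and bind them. This is a minimal sketch assuming `fit.ets` and `fit` (the ARIMA fit) from the code above are still in the workspace:

```
# Test-set error metrics for the two models, stacked for comparison
ets_test   <- forecast::accuracy(forecast::forecast(fit.ets, h = 12), testdat)["Test set", ]
arima_test <- forecast::accuracy(forecast::forecast(fit, h = 12), testdat)["Test set", ]
rbind(ETS = ets_test, ARIMA = arima_test)
```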
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/A-inputting-data.html
A Inputting data ================ This chapter will illustrate how to input data that is stored in csv files in various common formats. ### One response variable If your data look like this: ``` Year Species metric.tons 2018, Fish1, 1 2019, Fish1, 2 2018, Fish2, 3 2019, Fish2, 4 2018, Fish3, 6 2019, Fish4, NA ``` Use this code: ``` test <- read.csv("Data/test.csv", stringsAsFactors = FALSE) save(test, file="test.RData") ``` --- ### Many response variables Read in a file where the data are in columns. If your data look like this, with each species (or site) across the columns: ``` Year,Anchovy,Sardine,Chub mackerel,Horse mackerel,Mackerel,Jack Mackerel 1964,5449.2,12984.4,1720.7,4022.4,NA,NA 1965,4263.5,10611.1,1278.5,4158.3,NA,NA 1966,5146.4,11437.8,802.6,3012.1,NA,NA ``` Use this code: ``` test <- read.csv("Data/test.csv", stringsAsFactors = FALSE) test <- reshape2::melt(test, id="Year", value.name="metric.tons", variable.name="Species") save(test, file="test.RData") ``` --- ### Many response variables, two time variables If your data also have, say, a month (or qtr) column and look like this: ``` Year,Month,Anchovy,Sardine,Chub mackerel,Horse mackerel,Mackerel,Jack Mackerel 1964,1,5449.2,12984.4,1720.7,4022.4,NA,NA 1964,2,4263.5,10611.1,1278.5,4158.3,NA,NA 1964,3,5146.4,11437.8,802.6,3012.1,NA,NA ``` Use this code: ``` test <- read.csv("Data/test.csv", stringsAsFactors = FALSE) test <- reshape2::melt(test, id=c("Year","Month"), value.name="metric.tons", variable.name="Species") save(test, file="test.RData") ``` --- ### One response variable, multiple explanatory variables If your data look like this: ``` Year, Anchovy, SST, Mackerel 1964, 5449.2, 24.4, 1720.7 1965, 4263.5, 30.1, 1278.5 1966, 5146.4, 23.8, 802.6 ``` Use this code: ``` test <- read.csv("Data/test.csv", stringsAsFactors = FALSE) save(test, file="test.RData") ``` Use this `lm()` model (or `gam()`, etc.): ``` fit <- lm(Anchovy ~ SST + Mackerel, data=test) ```
Time Series Analysis and Forecasting
fish-forecast.github.io
https://fish-forecast.github.io/Fish-Forecast-Bookdown/B-downloading-icoads-covariates.html
B Downloading ICOADS covariates =============================== The covariates are those in Stergiou and Christou except that NS winds might not be vertical wind. I used the ICOADS data, not the COADS data. The boxes are 1 degree, but on 1 degree centers, not 0\.5 degree centers. Thus a box is 39\.5\-40\.5, not 39\-40\. The following is the code used to download the covariates from the NOAA ERDDAP server. It creates a list with the monthly data for each box. ``` library(RCurl) library(XML) library(stringr) lat <- c(39,39,40) lon <- c(24,25,25) covs <- list() for(i in 1:3){ loc <- paste0("[(",lat[i],".5):1:(",lat[i],".5)][(",lon[i],".5):1:(",lon[i],".5)]") url <- paste0("https://coastwatch.pfeg.noaa.gov/erddap/griddap/esrlIcoads1ge.htmlTable?air[(1964-01-01):1:(2018-08-01T00:00:00Z)]",loc,",slp[(1964-01-01):1:(2018-08-01T00:00:00Z)]",loc,",sst[(1964-01-01):1:(2018-08-01T00:00:00Z)]",loc,",vwnd[(1964-01-01):1:(2018-08-01T00:00:00Z)]",loc,",wspd3[(1964-01-01):1:(2018-08-01T00:00:00Z)]",loc) doc <- getURL(url) cov <- readHTMLTable(doc, which=2, stringsAsFactors=FALSE) coln <- paste0(colnames(cov),".",cov[1,]) coln <- str_replace(coln, "\n", "") coln <- str_replace_all(coln, "[*]", "") cov <- cov[-1,] colnames(cov) <- coln cov[,1] <- as.Date(cov[,1]) for(j in 2:dim(cov)[2]) cov[,j] <- as.numeric(cov[,j]) covs[[i]] <- cov } ``` Now create the monthly and yearly means. ``` covsmean <- covs[[1]] for(j in 2:dim(cov)[2]) covsmean[,j] <- apply(cbind(covs[[1]][,j], covs[[2]][,j], covs[[3]][,j]),1,mean,na.rm=TRUE) covsmean <- covsmean[,c(-2,-3)] covsmean$Year <- as.factor(format(cov[,1],"%Y")) covsmean.mon <- covsmean covsmean.year <- data.frame(Year=unique(covsmean$Year)) for(j in 2:(dim(covsmean)[2]-1)) covsmean.year <- cbind(covsmean.year, tapply(covsmean[,j], covsmean$Year, mean, na.rm=TRUE)) colnames(covsmean.year) <- c("Year",colnames(covsmean)[2:(dim(covsmean)[2]-1)]) ```
Time Series Analysis and Forecasting
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/many-models.html
25 Many models ============== 25\.1 Introduction ------------------ ``` library("modelr") library("tidyverse") library("gapminder") ``` 25\.2 gapminder --------------- ### Exercise 25\.2\.1 A linear trend seems to be slightly too simple for the overall trend. Can you do better with a quadratic polynomial? How can you interpret the coefficients of the quadratic? Hint you might want to transform year so that it has mean zero.) The following code replicates the analysis in the chapter but replaces the function `country_model()` with a regression that includes the year squared. ``` lifeExp ~ poly(year, 2) ``` ``` country_model <- function(df) { lm(lifeExp ~ poly(year - median(year), 2), data = df) } by_country <- gapminder %>% group_by(country, continent) %>% nest() by_country <- by_country %>% mutate(model = map(data, country_model)) ``` ``` by_country <- by_country %>% mutate( resids = map2(data, model, add_residuals) ) by_country #> # A tibble: 142 x 5 #> # Groups: country, continent [142] #> country continent data model resids #> <fct> <fct> <list> <list> <list> #> 1 Afghanistan Asia <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> 2 Albania Europe <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> 3 Algeria Africa <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> 4 Angola Africa <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> 5 Argentina Americas <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> 6 Australia Oceania <tibble [12 × 4]> <lm> <tibble [12 × 5]> #> # … with 136 more rows ``` ``` unnest(by_country, resids) %>% ggplot(aes(year, resid)) + geom_line(aes(group = country), alpha = 1 / 3) + geom_smooth(se = FALSE) #> `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` ``` by_country %>% mutate(glance = map(model, broom::glance)) %>% unnest(glance, .drop = TRUE) %>% ggplot(aes(continent, r.squared)) + geom_jitter(width = 0.5) #> Warning: The `.drop` argument of `unnest()` is deprecated as of tidyr 1.0.0. #> All list-columns are now preserved. #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_warnings()` to see where this warning was generated. ``` ### Exercise 25\.2\.2 Explore other methods for visualizing the distribution of \\(R^2\\) per continent. You might want to try the ggbeeswarm package, which provides similar methods for avoiding overlaps as jitter, but uses deterministic methods. See exercise 7\.5\.1\.1\.6 for more on ggbeeswarm ``` library("ggbeeswarm") by_country %>% mutate(glance = map(model, broom::glance)) %>% unnest(glance, .drop = TRUE) %>% ggplot(aes(continent, r.squared)) + geom_beeswarm() ``` ### Exercise 25\.2\.3 To create the last plot (showing the data for the countries with the worst model fits), we needed two steps: we created a data frame with one row per country and then semi\-joined it to the original dataset. It’s possible to avoid this join if we use `unnest()` instead of `unnest(.drop = TRUE)`. How? ``` gapminder %>% group_by(country, continent) %>% nest() %>% mutate(model = map(data, ~lm(lifeExp ~ year, .))) %>% mutate(glance = map(model, broom::glance)) %>% unnest(glance) %>% unnest(data) %>% filter(r.squared < 0.25) %>% ggplot(aes(year, lifeExp)) + geom_line(aes(color = country)) ``` 25\.3 List\-columns ------------------- No exercises 25\.4 Creating list\-columns ---------------------------- ### Exercise 25\.4\.1 List all the functions that you can think of that take a atomic vector and return a list. Many functions in the stringr package take a character vector as input and return a list. 
``` str_split(sentences[1:3], " ") #> [[1]] #> [1] "The" "birch" "canoe" "slid" "on" "the" "smooth" #> [8] "planks." #> #> [[2]] #> [1] "Glue" "the" "sheet" "to" "the" #> [6] "dark" "blue" "background." #> #> [[3]] #> [1] "It's" "easy" "to" "tell" "the" "depth" "of" "a" "well." str_match_all(c("abc", "aa", "aabaa", "abbbc"), "a+") #> [[1]] #> [,1] #> [1,] "a" #> #> [[2]] #> [,1] #> [1,] "aa" #> #> [[3]] #> [,1] #> [1,] "aa" #> [2,] "aa" #> #> [[4]] #> [,1] #> [1,] "a" ``` The `map()` function takes a vector and always returns a list. ``` map(1:3, runif) #> [[1]] #> [1] 0.601 #> #> [[2]] #> [1] 0.1572 0.0074 #> #> [[3]] #> [1] 0.466 0.498 0.290 ``` ### Exercise 25\.4\.2 Brainstorm useful summary functions that, like `quantile()`, return multiple values. Some examples of summary functions that return multiple values are the following. ``` range(mtcars$mpg) #> [1] 10.4 33.9 fivenum(mtcars$mpg) #> [1] 10.4 15.3 19.2 22.8 33.9 boxplot.stats(mtcars$mpg) #> $stats #> [1] 10.4 15.3 19.2 22.8 33.9 #> #> $n #> [1] 32 #> #> $conf #> [1] 17.1 21.3 #> #> $out #> numeric(0) ``` ### Exercise 25\.4\.3 What’s missing in the following data frame? How does `quantile()` return that missing piece? Why isn’t that helpful here? ``` mtcars %>% group_by(cyl) %>% summarise(q = list(quantile(mpg))) %>% unnest() #> `summarise()` ungrouping output (override with `.groups` argument) #> Warning: `cols` is now required when using unnest(). #> Please use `cols = c(q)` #> # A tibble: 15 x 2 #> cyl q #> <dbl> <dbl> #> 1 4 21.4 #> 2 4 22.8 #> 3 4 26 #> 4 4 30.4 #> 5 4 33.9 #> 6 6 17.8 #> # … with 9 more rows ``` The particular quantiles of the values are missing, e.g. `0%`, `25%`, `50%`, `75%`, `100%`. `quantile()` returns these in the names of the vector. ``` quantile(mtcars$mpg) #> 0% 25% 50% 75% 100% #> 10.4 15.4 19.2 22.8 33.9 ``` Since the `unnest` function drops the names of the vector, they aren’t useful here. ### Exercise 25\.4\.4 What does this code do? Why might might it be useful? ``` mtcars %>% group_by(cyl) %>% summarise_each(funs(list)) ``` ``` mtcars %>% group_by(cyl) %>% summarise_each(funs(list)) #> Warning: `summarise_each_()` is deprecated as of dplyr 0.7.0. #> Please use `across()` instead. #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_warnings()` to see where this warning was generated. #> Warning: `funs()` is deprecated as of dplyr 0.8.0. #> Please use a list of either functions or lambdas: #> #> # Simple named list: #> list(mean = mean, median = median) #> #> # Auto named with `tibble::lst()`: #> tibble::lst(mean, median) #> #> # Using lambdas #> list(~ mean(., trim = .2), ~ median(., na.rm = TRUE)) #> This warning is displayed once every 8 hours. #> Call `lifecycle::last_warnings()` to see where this warning was generated. #> # A tibble: 3 x 11 #> cyl mpg disp hp drat wt qsec vs am gear carb #> <dbl> <list> <list> <list> <list> <list> <list> <list> <list> <list> <list> #> 1 4 <dbl [… <dbl [… <dbl [… <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … #> 2 6 <dbl [… <dbl [… <dbl [… <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … #> 3 8 <dbl [… <dbl [… <dbl [… <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … <dbl … ``` It creates a data frame in which each row corresponds to a value of `cyl`, and each observation for each column (other than `cyl`) is a vector of all the values of that column for that value of `cyl`. It seems like it should be useful to have all the observations of each variable for each group, but off the top of my head, I can’t think of a specific use for this. 
But, it seems that it may do many things that `dplyr::do` does. 25\.5 Simplifying list\-columns ------------------------------- ### Exercise 25\.5\.1 Why might the `lengths()` function be useful for creating atomic vector columns from list\-columns? The `lengths()` function returns the lengths of each element in a list. It could be useful for testing whether all elements in a list\-column are the same length. You could get the maximum length to determine how many atomic vector columns to create. It is also a replacement for something like `map_int(x, length)` or `sapply(x, length)`. ### Exercise 25\.5\.2 List the most common types of vector found in a data frame. What makes lists different? The common types of vectors in data frames are: * `logical` * `numeric` * `integer` * `character` * `factor` All of the common types of vectors in data frames are atomic. Lists are not atomic since they can contain other lists and other vectors. 25\.6 Making tidy data with broom --------------------------------- No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/communicate-intro.html
26 Introduction =============== No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/r-markdown.html
27 R Markdown ============= 27\.1 Introduction ------------------ 27\.2 R Markdown basics ----------------------- ### Exercise 27\.2\.1 Create a new notebook using *File \> New File \> R Notebook*. Read the instructions. Practice running the chunks. Verify that you can modify the code, re\-run it, and see modified output. This exercise is left to the reader. ### Exercise 27\.2\.2 Create a new R Markdown document with *File \> New File \> R Markdown …*. Knit it by clicking the appropriate button. Knit it by using the appropriate keyboard short cut. Verify that you can modify the input and see the output update. This exercise is mostly left to the reader. Recall that the keyboard shortcut to knit a file is `Cmd/Ctrl + Alt + K`. ### Exercise 27\.2\.3 Compare and contrast the R notebook and R markdown files you created above. How are the outputs similar? How are they different? How are the inputs similar? How are they different? What happens if you copy the YAML header from one to the other? R notebook files show the output of code chunks inside the editor, while hiding the console, when they are edited in RStudio. This contrasts with R markdown files, which show their output inside the console, and do not show output inside the editor. This makes R notebook documents appealing for interactive exploration. In this R markdown file, the plot is displayed in the “Plot” tab, while the output of `summary()` is displayed in the tab. However, when this same file is converted to a R notebook, the plot and `summary()` output are displayed in the “Editor” below the chunk of code which created them. Both R notebooks and R markdown files and can be knit to produce HTML output. R markdown files can be knit to a variety of formats including HTML, PDF, and DOCX. However, R notebooks can only be knit to HTML files, which are given the extension `.nb.html`. However, unlike R markdown files knit to HTML, the HTML output of an R notebook includes copy of the original `.Rmd` source. If a `.nb.html` file is opened in RStudio, the source of the `.Rmd` file can be extracted and edited. In contrast, there is no way to recover the original source of an R markdown file from its output, except through the parts that are displayed in the output itself. R markdown files and R notebooks differ in the value of `output` in their YAML headers. The YAML header for the R notebook will have the line, ``` --- ouptut: html_notebook --- ``` For example, this is a R notebook, ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_notebook --- Text of the document. ``` The YAML header for the R markdown file will have the line, ``` ouptut: html_document ``` For example, this is a R markdown file. ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_document --- Text of the document. ``` Copying the YAML header from an R notebook to a R markdown file changes it to an R notebook, and vice\-versa. More specifically, an `.Rmd` file can be changed to R markdown file or R notebook by changing the value of the `output` key in the header. The RStudio IDE and the rmarkdown package both use the YAML header of an `.Rmd` file to determine the document\-type of the file. 
For more information on R markdown notebooks see the following sources: * [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) section), Chapter [Notebook](https://bookdown.org/yihui/rmarkdown/notebook.html) * [Difference between R MarkDown and R NoteBook](https://stackoverflow.com/questions/43820483/difference-between-r-markdown-and-r-notebook/43898504#43898504) StackOverflow thread. ### Exercise 27\.2\.4 Create one new R Markdown document for each of the three built\-in formats: HTML, PDF and Word. Knit each of the three documents. How does the output differ? How does the input differ? (You may need to install LaTeX in order to build the PDF output — RStudio will prompt you if this is necessary.) They produce different outputs, both in the final documents and intermediate files (notably the type of plots produced). The only difference in the inputs is the value of `output` in the YAML header. The following `.Rmd` would be knit to HTML. ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_document --- Text of the document. ``` If the value of the `output` key is changed to `word_document`, knitting the file will create a Word document (DOCX). ``` --- title: "Diamond sizes" date: 2016-08-25 output: word_document --- Text of the document. ``` Similarly, if the value of the `output` key is changed to `pdf_document`, knitting the file will create a PDF. ``` --- title: "Diamond sizes" date: 2016-08-25 output: pdf_document --- Text of the document. ``` If you click on the *Knit* menu button and then on one of *Knit to HTML*, *Knit to PDF*, or *Knit to Word*, you will see that the value of the `output` key will change to `html_document`, `pdf_document`, or `word_document`, respectively. You will see that the value of `output` will look a little different than the previous examples. It will add a new line with a value like, `pdf_document: default`. ``` --- title: "Diamond sizes" date: 2016-08-25 output: pdf_document: default --- Text of the document. ``` This format is more general, allows the document have multiple output formats as well as configuration settings that allow more fine\-grained control over the look of the output format. The chapter [R Markdown Formats](https://r4ds.had.co.nz/r-markdown-formats.html) discusses output formats for R markdown files in more detail. 27\.3 Text formatting with Markdown ----------------------------------- ### Exercise 27\.3\.1 Practice what you’ve learned by creating a brief CV. The title should be your name, and you should include headings for (at least) education or employment. Each of the sections should include a bulleted list of jobs/degrees. Highlight the year in bold. A minimal example is the following CV. ``` --- title: "Hadley Wickham" --- ## Employment - Chief Scientist, Rstudio, **2013--present**. - Adjust Professor, Rice University, Houston, TX, **2013--present**. - Assistant Professor, Rice University, Houston, TX, **2008--12**. ## Education - Ph.D. in Statistics, Iowa State University, Ames, IA, **2008** - M.Sc. in Statistics, University of Auckland, New Zealand, **2004** - B.Sc. in Statistics and Computer Science, First Class Honours, The University of Auckland, New Zealand, **2002**. - Bachelor of Human Biology, First Class Honours, The University of Auckland, Auckland, New Zealand, **1999**. ``` Your own example could be much more detailed. ### Exercise 27\.3\.2 Using the R Markdown quick reference, figure out how to: 1. Add a footnote. 2. Add a horizontal rule. 3. Add a block quote. 
``` --- title: Horizontal Rules, Block Quotes, and Footnotes --- The quick brown fox jumped over the lazy dog.[^quick-fox] Use three or more `-` for a horizontal rule. For example, --- The horizontal rule uses the same syntax as a YAML block? So how does R markdown distinguish between the two? Three dashes ("---") is only treated the start of a YAML block if it is at the start of the document. > This would be a block quote. Generally, block quotes are used to indicate > quotes longer than a three or four lines. [^quick-fox]: This is an example of a footnote. The sentence this is footnoting is often used for displaying fonts because it includes all 26 letters of the English alphabet. ``` ### Exercise 27\.3\.3 Copy and paste the contents of `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown> in to a local R markdown document. Check that you can run it, then add text after the frequency polygon that describes its most striking features. The following R markdown document answers this question as well as exercises [Exercise 27\.4\.1](r-markdown.html#exercise-27.4.1), [Exercise 27\.4\.2](r-markdown.html#exercise-27.4.2), and [Exercise 27\.4\.3](r-markdown.html#exercise-27.4.3). ``` --- title: "Diamond sizes" output: html_document date: '2018-07-15' --- ```{r knitr_opts, include = FALSE} knitr::opts_chunk$set(echo = FALSE) ``` ```{r setup, message = FALSE} library("ggplot2") library("dplyr") ``` ```{r} smaller <- diamonds %>% filter(carat <= 2.5) ``` ```{r include = FALSE, purl = FALSE} # Hide objects and functions ONLY used inline n_larger <- nrow(diamonds) - nrow(smaller) pct_larger <- n_larger / nrow(diamonds) * 100 comma <- function(x) { format(x, digits = 2, big.mark = ",") } ``` ## Size and Cut, Color, and Clarity Diamonds with lower quality cuts (cuts are ranked from "Ideal" to "Fair") tend to be be larger. ```{r} ggplot(diamonds, aes(y = carat, x = cut)) + geom_boxplot() ``` Likewise, diamonds with worse color (diamond colors are ranked from J (worst) to D (best)) tend to be larger: ```{r} ggplot(diamonds, aes(y = carat, x = color)) + geom_boxplot() ``` The pattern present in cut and color is also present in clarity. Diamonds with worse clarity (I1 (worst), SI1, SI2, VS1, VS2, VVS1, VVS2, IF (best)) tend to be larger: ```{r} ggplot(diamonds, aes(y = carat, x = clarity)) + geom_boxplot() ``` These patterns are consistent with there being a profitability threshold for retail diamonds that is a function of carat, clarity, color, cut and other characteristics. A diamond may be profitable to sell if a poor value of one feature, for example, poor clarity, color, or cut, is be offset by a good value of another feature, such as a large size. This can be considered an example of [Berkson's paradox](https://en.wikipedia.org/wiki/Berkson%27s_paradox). ## Largest Diamonds We have data about `r comma(nrow(diamonds))` diamonds. Only `r n_larger` (`r round(pct_larger, 1)`%) are larger than 2.5 carats. The distribution of the remainder is shown below: ```{r} smaller %>% ggplot(aes(carat)) + geom_freqpoly(binwidth = 0.01) ``` The frequency distribution of diamond sizes is marked by spikes at whole-number and half-carat values, as well as several other carat values corresponding to fractions. The largest twenty diamonds (by carat) in the datasets are, ```{r results = "asis"} diamonds %>% arrange(desc(carat)) %>% slice(1:20) %>% select(carat, cut, color, clarity) %>% knitr::kable( caption = "The largest 20 diamonds in the `diamonds` dataset." 
) ``` Most of the twenty largest diamonds are in the lowest clarity category ("I1"), with one being in the second best category ("VVS2"). The top twenty diamonds have colors ranging from the worst, "J", to the best, "D", categories, though most are in the lower categories "J" and "I". The top twenty diamonds are more evenly distributed among the cut categories, from "Fair" to "Ideal", although the worst category (Fair) is the most common. ``` 27\.4 Code chunks ----------------- ### Exercise 27\.4\.1 Add a section that explores how diamond sizes vary by cut, color, and clarity. Assume you’re writing a report for someone who doesn’t know R, and instead of setting `echo = FALSE` on each chunk, set a global option. See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3). ### Exercise 27\.4\.2 Download `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown>. Add a section that describes the largest 20 diamonds, including a table that displays their most important attributes. See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3). I use `arrange()` and `slice()` to select the largest twenty diamonds, and `knitr::kable()` to produce a formatted table. ### Exercise 27\.4\.3 Modify `diamonds-sizes.Rmd` to use `comma()` to produce nicely formatted output. Also include the percentage of diamonds that are larger than 2\.5 carats. See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3). I moved the computation of the number larger and percent of diamonds larger than 2\.5 carats into a code chunk. I find that it is best to keep inline R expressions simple, usually consisting of an object and a formatting function. This makes it both easier to read and test the R code, while simultaneously making the prose easier to read. It helps the readability of the code and document to keep the computation of objects used in prose close to their use. Calculating those objects in a code chunk with the `include = FALSE` option (as is done in `diamonds-size.Rmd`) is useful in this regard. ### Exercise 27\.4\.4 Set up a network of chunks where `d` depends on `c` and `b`, and both `b` and `c` depend on `a`. Have each chunk print lubridate::now(), set cache \= TRUE, then verify your understanding of caching. ``` --- title: "Exercise 24.4.7.4" author: "Jeffrey Arnold" date: "2/1/2018" output: html_document --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE, cache = TRUE) ``` The chunk `a` has no dependencies. ```{r a} print(lubridate::now()) x <- 1 ``` The chunk `b` depends on `a`. ```{r b, dependson = c("a")} print(lubridate::now()) y <- x + 1 ``` The chunk `c` depends on `a`. ```{r c, dependson = c("a")} print(lubridate::now()) z <- x * 2 ``` The chunk `d` depends on `c` and `b`: ```{r d, dependson = c("c", "b")} print(lubridate::now()) w <- y + z ``` If this document is knit repeatedly, the value printed by `lubridate::now()` will be the same for all chunks, and the same as the first time the document was run with caching. ``` 27\.5 Troubleshooting --------------------- No exercises 27\.6 YAML header ----------------- No exercises 27\.7 Learning more ------------------- No exercises
### Exercise 27\.2\.2 Create a new R Markdown document with *File \> New File \> R Markdown …*. Knit it by clicking the appropriate button. Knit it by using the appropriate keyboard short cut. Verify that you can modify the input and see the output update. This exercise is mostly left to the reader. Recall that the keyboard shortcut to knit a file is `Cmd/Ctrl + Alt + K`. ### Exercise 27\.2\.3 Compare and contrast the R notebook and R markdown files you created above. How are the outputs similar? How are they different? How are the inputs similar? How are they different? What happens if you copy the YAML header from one to the other? R notebook files show the output of code chunks inside the editor, while hiding the console, when they are edited in RStudio. This contrasts with R markdown files, which show their output inside the console, and do not show output inside the editor. This makes R notebook documents appealing for interactive exploration. In this R markdown file, the plot is displayed in the “Plot” tab, while the output of `summary()` is displayed in the tab. However, when this same file is converted to a R notebook, the plot and `summary()` output are displayed in the “Editor” below the chunk of code which created them. Both R notebooks and R markdown files and can be knit to produce HTML output. R markdown files can be knit to a variety of formats including HTML, PDF, and DOCX. However, R notebooks can only be knit to HTML files, which are given the extension `.nb.html`. However, unlike R markdown files knit to HTML, the HTML output of an R notebook includes copy of the original `.Rmd` source. If a `.nb.html` file is opened in RStudio, the source of the `.Rmd` file can be extracted and edited. In contrast, there is no way to recover the original source of an R markdown file from its output, except through the parts that are displayed in the output itself. R markdown files and R notebooks differ in the value of `output` in their YAML headers. The YAML header for the R notebook will have the line, ``` --- ouptut: html_notebook --- ``` For example, this is a R notebook, ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_notebook --- Text of the document. ``` The YAML header for the R markdown file will have the line, ``` ouptut: html_document ``` For example, this is a R markdown file. ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_document --- Text of the document. ``` Copying the YAML header from an R notebook to a R markdown file changes it to an R notebook, and vice\-versa. More specifically, an `.Rmd` file can be changed to R markdown file or R notebook by changing the value of the `output` key in the header. The RStudio IDE and the rmarkdown package both use the YAML header of an `.Rmd` file to determine the document\-type of the file. For more information on R markdown notebooks see the following sources: * [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) section), Chapter [Notebook](https://bookdown.org/yihui/rmarkdown/notebook.html) * [Difference between R MarkDown and R NoteBook](https://stackoverflow.com/questions/43820483/difference-between-r-markdown-and-r-notebook/43898504#43898504) StackOverflow thread. ### Exercise 27\.2\.4 Create one new R Markdown document for each of the three built\-in formats: HTML, PDF and Word. Knit each of the three documents. How does the output differ? How does the input differ? (You may need to install LaTeX in order to build the PDF output — RStudio will prompt you if this is necessary.) 
They produce different outputs, both in the final documents and intermediate files (notably the type of plots produced). The only difference in the inputs is the value of `output` in the YAML header. The following `.Rmd` would be knit to HTML. ``` --- title: "Diamond sizes" date: 2016-08-25 output: html_document --- Text of the document. ``` If the value of the `output` key is changed to `word_document`, knitting the file will create a Word document (DOCX). ``` --- title: "Diamond sizes" date: 2016-08-25 output: word_document --- Text of the document. ``` Similarly, if the value of the `output` key is changed to `pdf_document`, knitting the file will create a PDF. ``` --- title: "Diamond sizes" date: 2016-08-25 output: pdf_document --- Text of the document. ``` If you click on the *Knit* menu button and then on one of *Knit to HTML*, *Knit to PDF*, or *Knit to Word*, you will see that the value of the `output` key will change to `html_document`, `pdf_document`, or `word_document`, respectively. You will see that the value of `output` will look a little different than the previous examples. It will add a new line with a value like, `pdf_document: default`. ``` --- title: "Diamond sizes" date: 2016-08-25 output: pdf_document: default --- Text of the document. ``` This format is more general, allows the document have multiple output formats as well as configuration settings that allow more fine\-grained control over the look of the output format. The chapter [R Markdown Formats](https://r4ds.had.co.nz/r-markdown-formats.html) discusses output formats for R markdown files in more detail. ### Exercise 27\.2\.1 Create a new notebook using *File \> New File \> R Notebook*. Read the instructions. Practice running the chunks. Verify that you can modify the code, re\-run it, and see modified output. This exercise is left to the reader. ### Exercise 27\.2\.2 Create a new R Markdown document with *File \> New File \> R Markdown …*. Knit it by clicking the appropriate button. Knit it by using the appropriate keyboard short cut. Verify that you can modify the input and see the output update. This exercise is mostly left to the reader. Recall that the keyboard shortcut to knit a file is `Cmd/Ctrl + Alt + K`. ### Exercise 27\.2\.3 Compare and contrast the R notebook and R markdown files you created above. How are the outputs similar? How are they different? How are the inputs similar? How are they different? What happens if you copy the YAML header from one to the other? R notebook files show the output of code chunks inside the editor, while hiding the console, when they are edited in RStudio. This contrasts with R markdown files, which show their output inside the console, and do not show output inside the editor. This makes R notebook documents appealing for interactive exploration. In this R markdown file, the plot is displayed in the “Plot” tab, while the output of `summary()` is displayed in the tab. However, when this same file is converted to a R notebook, the plot and `summary()` output are displayed in the “Editor” below the chunk of code which created them. Both R notebooks and R markdown files and can be knit to produce HTML output. R markdown files can be knit to a variety of formats including HTML, PDF, and DOCX. However, R notebooks can only be knit to HTML files, which are given the extension `.nb.html`. However, unlike R markdown files knit to HTML, the HTML output of an R notebook includes copy of the original `.Rmd` source. 
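To illustrate that last point (this example is not from the original solutions), the `output` key can list several formats at once, each with its own options; the Knit button then offers each of them, and `default` simply means "use that format's default settings".

```
---
title: "Diamond sizes"
date: 2016-08-25
output:
  html_document:
    toc: true
  pdf_document: default
---

Text of the document.
```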
27\.3 Text formatting with Markdown
-----------------------------------

### Exercise 27\.3\.1

Practice what you’ve learned by creating a brief CV. The title should be your name, and you should include headings for (at least) education or employment. Each of the sections should include a bulleted list of jobs/degrees. Highlight the year in bold.

A minimal example is the following CV.

```
---
title: "Hadley Wickham"
---

## Employment

- Chief Scientist, RStudio, **2013--present**.
- Adjunct Professor, Rice University, Houston, TX, **2013--present**.
- Assistant Professor, Rice University, Houston, TX, **2008--12**.

## Education

- Ph.D. in Statistics, Iowa State University, Ames, IA, **2008**
- M.Sc. in Statistics, University of Auckland, New Zealand, **2004**
- B.Sc. in Statistics and Computer Science, First Class Honours, The University of Auckland, New Zealand, **2002**.
- Bachelor of Human Biology, First Class Honours, The University of Auckland, Auckland, New Zealand, **1999**.
```

Your own example could be much more detailed.

### Exercise 27\.3\.2

Using the R Markdown quick reference, figure out how to:

1. Add a footnote.
2. Add a horizontal rule.
3. Add a block quote.

```
---
title: Horizontal Rules, Block Quotes, and Footnotes
---

The quick brown fox jumped over the lazy dog.[^quick-fox]

Use three or more `-` for a horizontal rule. For example,

---

The horizontal rule uses the same syntax as a YAML block? So how does R markdown
distinguish between the two? Three dashes ("---") are only treated as the start
of a YAML block if they appear at the start of the document.

> This would be a block quote. Generally, block quotes are used to indicate
> quotes longer than three or four lines.

[^quick-fox]: This is an example of a footnote. The sentence this is footnoting
  is often used for displaying fonts because it includes all 26 letters of the
  English alphabet.
```

### Exercise 27\.3\.3

Copy and paste the contents of `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown> into a local R markdown document. Check that you can run it, then add text after the frequency polygon that describes its most striking features.

The following R markdown document answers this question as well as exercises [Exercise 27\.4\.1](r-markdown.html#exercise-27.4.1), [Exercise 27\.4\.2](r-markdown.html#exercise-27.4.2), and [Exercise 27\.4\.3](r-markdown.html#exercise-27.4.3).

```
---
title: "Diamond sizes"
output: html_document
date: '2018-07-15'
---

```{r knitr_opts, include = FALSE}
knitr::opts_chunk$set(echo = FALSE)
```

```{r setup, message = FALSE}
library("ggplot2")
library("dplyr")
```

```{r}
smaller <- diamonds %>%
  filter(carat <= 2.5)
```

```{r include = FALSE, purl = FALSE}
# Hide objects and functions ONLY used inline
n_larger <- nrow(diamonds) - nrow(smaller)
pct_larger <- n_larger / nrow(diamonds) * 100

comma <- function(x) {
  format(x, digits = 2, big.mark = ",")
}
```

## Size and Cut, Color, and Clarity

Diamonds with lower quality cuts (cuts are ranked from "Ideal" to "Fair") tend
to be larger.

```{r}
ggplot(diamonds, aes(y = carat, x = cut)) +
  geom_boxplot()
```

Likewise, diamonds with worse color (diamond colors are ranked from J (worst)
to D (best)) tend to be larger:

```{r}
ggplot(diamonds, aes(y = carat, x = color)) +
  geom_boxplot()
```

The pattern present in cut and color is also present in clarity.
Diamonds with worse clarity (clarity is ranked from I1 (worst), SI2, SI1, VS2,
VS1, VVS2, VVS1, to IF (best)) tend to be larger:

```{r}
ggplot(diamonds, aes(y = carat, x = clarity)) +
  geom_boxplot()
```

These patterns are consistent with there being a profitability threshold for
retail diamonds that is a function of carat, clarity, color, cut and other
characteristics. A diamond may be profitable to sell if a poor value of one
feature, for example, poor clarity, color, or cut, is offset by a good value of
another feature, such as a large size. This can be considered an example of
[Berkson's paradox](https://en.wikipedia.org/wiki/Berkson%27s_paradox).

## Largest Diamonds

We have data about `r comma(nrow(diamonds))` diamonds. Only
`r n_larger` (`r round(pct_larger, 1)`%) are larger than 2.5 carats.
The distribution of the remainder is shown below:

```{r}
smaller %>%
  ggplot(aes(carat)) +
  geom_freqpoly(binwidth = 0.01)
```

The frequency distribution of diamond sizes is marked by spikes at
whole-number and half-carat values, as well as several other carat values
corresponding to fractions.

The largest twenty diamonds (by carat) in the dataset are:

```{r results = "asis"}
diamonds %>%
  arrange(desc(carat)) %>%
  slice(1:20) %>%
  select(carat, cut, color, clarity) %>%
  knitr::kable(
    caption = "The largest 20 diamonds in the `diamonds` dataset."
  )
```

Most of the twenty largest diamonds are in the lowest clarity category ("I1"),
with one being in the much higher "VVS2" category. The top twenty diamonds have
colors ranging from the worst ("J") to the best ("D") categories, though most
are in the lower categories "J" and "I". The top twenty diamonds are more
evenly distributed among the cut categories, from "Fair" to "Ideal", although
the worst category (Fair) is the most common.
```
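As a quick check of the claim about spikes at whole\-number and half\-carat values (this snippet is not part of the report above), counting the most common carat values makes the clustering around those landmark sizes easy to see.

```r
# Not part of the original report: tabulate the most common carat values to
# verify the spikes described above.
library(dplyr)
library(ggplot2)  # provides the diamonds data

diamonds %>%
  count(carat, sort = TRUE) %>%
  head(10)
```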
27\.4 Code chunks
-----------------

### Exercise 27\.4\.1

Add a section that explores how diamond sizes vary by cut, color, and clarity. Assume you’re writing a report for someone who doesn’t know R, and instead of setting `echo = FALSE` on each chunk, set a global option.

See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3).
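For reference, a minimal sketch of the global option itself (the full report is in the Exercise 27\.3\.3 answer above): the line below goes in a setup chunk near the top of the `.Rmd`, after which every later chunk hides its code unless it sets `echo` explicitly.

```r
# Set a global chunk option once, inside a setup chunk at the top of the .Rmd;
# individual chunks can still override it with their own echo = TRUE.
knitr::opts_chunk$set(echo = FALSE)
```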
### Exercise 27\.4\.2

Download `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown>. Add a section that describes the largest 20 diamonds, including a table that displays their most important attributes.

See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3). I use `arrange()` and `slice()` to select the largest twenty diamonds, and `knitr::kable()` to produce a formatted table.

### Exercise 27\.4\.3

Modify `diamonds-sizes.Rmd` to use `comma()` to produce nicely formatted output. Also include the percentage of diamonds that are larger than 2\.5 carats.

See the answer to [Exercise 27\.3\.3](r-markdown.html#exercise-27.3.3). I moved the computation of the number and percentage of diamonds larger than 2\.5 carats into a code chunk. I find that it is best to keep inline R expressions simple, usually consisting of an object and a formatting function. This makes the R code easier to read and test, while simultaneously making the prose easier to read. It also helps the readability of the code and document to keep the computation of objects used in prose close to their use. Calculating those objects in a code chunk with the `include = FALSE` option (as is done in `diamond-sizes.Rmd`) is useful in this regard. (A minimal sketch of this inline pattern appears after Exercise 27\.4\.4 below.)

### Exercise 27\.4\.4

Set up a network of chunks where `d` depends on `c` and `b`, and both `b` and `c` depend on `a`. Have each chunk print `lubridate::now()`, set `cache = TRUE`, then verify your understanding of caching.

```
---
title: "Exercise 27.4.4"
author: "Jeffrey Arnold"
date: "2/1/2018"
output: html_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, cache = TRUE)
```

The chunk `a` has no dependencies.

```{r a}
print(lubridate::now())
x <- 1
```

The chunk `b` depends on `a`.

```{r b, dependson = c("a")}
print(lubridate::now())
y <- x + 1
```

The chunk `c` depends on `a`.

```{r c, dependson = c("a")}
print(lubridate::now())
z <- x * 2
```

The chunk `d` depends on `c` and `b`:

```{r d, dependson = c("c", "b")}
print(lubridate::now())
w <- y + z
```

If this document is knit repeatedly, the value printed by `lubridate::now()`
will be the same for all chunks, and the same as the first time the document
was run with caching.
```
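As referenced in the Exercise 27\.4\.3 answer above, here is a minimal sketch of the inline\-expression pattern, reusing the `n_larger`, `pct_larger`, and `comma()` objects defined in the `diamond-sizes.Rmd` answer to Exercise 27\.3\.3; the chunk label is illustrative.

```
```{r larger-stats, include = FALSE}
n_larger <- nrow(diamonds) - nrow(smaller)
pct_larger <- n_larger / nrow(diamonds) * 100
```

Only `r comma(n_larger)` diamonds (`r round(pct_larger, 1)` percent) are larger
than 2.5 carats.
```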
27\.5 Troubleshooting
---------------------

No exercises

27\.6 YAML header
-----------------

No exercises

27\.7 Learning more
-------------------

No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/graphics-for-communication.html
28 Graphics for communication
=============================

28\.1 Introduction
------------------

```
library("tidyverse")
library("modelr")
library("lubridate")
```

28\.2 Label
-----------

### Exercise 28\.2\.1

Create one plot on the fuel economy data with customized `title`, `subtitle`, `caption`, `x`, `y`, and `colour` labels.

```
ggplot(
  data = mpg,
  mapping = aes(x = fct_reorder(class, hwy), y = hwy)
) +
  geom_boxplot() +
  coord_flip() +
  labs(
    title = "Compact Cars have > 10 More Hwy MPG than Pickup Trucks",
    subtitle = "Comparing the median highway mpg in each class",
    caption = "Data from fueleconomy.gov",
    x = "Car Class",
    y = "Highway Miles per Gallon"
  )
```

### Exercise 28\.2\.2

The `geom_smooth()` is somewhat misleading because the `hwy` for large engines is skewed upwards due to the inclusion of lightweight sports cars with big engines. Use your modeling tools to fit and display a better model.

First, I’ll plot the relationship between fuel efficiency and engine size (displacement) using all cars. The plot shows a strong negative relationship.

```
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(
    title = "Fuel Efficiency Decreases with Engine Size",
    caption = "Data from fueleconomy.gov",
    y = "Highway Miles per Gallon",
    x = "Engine Displacement"
  )
#> `geom_smooth()` using formula 'y ~ x'
```

However, if I disaggregate by car class, and plot the relationship between fuel efficiency and engine displacement within each class, I see a different relationship.

1. For all car classes except subcompact cars, there is no relationship or only a small negative relationship between fuel efficiency and engine size.
2. For subcompact cars, there is a strong negative relationship between fuel efficiency and engine size. As the question noted, this is because the subcompact car class includes both small cheap cars and sports cars with large engines.

```
ggplot(mpg, aes(displ, hwy, colour = class)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(
    title = "Fuel Efficiency Mostly Varies by Car Class",
    subtitle = "Subcompact cars' fuel efficiency varies by engine size",
    caption = "Data from fueleconomy.gov",
    y = "Highway Miles per Gallon",
    x = "Engine Displacement"
  )
#> `geom_smooth()` using formula 'y ~ x'
```

Another way to model and visualize the relationship between fuel efficiency and engine displacement after accounting for car class is to regress fuel efficiency on car class, and plot the residuals of that regression against engine displacement. The residuals of that regression are the variation in fuel efficiency not explained by car class. The relationship between fuel efficiency and engine displacement is attenuated after accounting for car class.

```
mod <- lm(hwy ~ class, data = mpg)

mpg %>%
  add_residuals(mod) %>%
  ggplot(aes(x = displ, y = resid)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(
    title = "Engine size has little effect on fuel efficiency",
    subtitle = "After accounting for car class",
    caption = "Data from fueleconomy.gov",
    y = "Highway MPG Relative to Class Average",
    x = "Engine Displacement"
  )
#> `geom_smooth()` using formula 'y ~ x'
```

### Exercise 28\.2\.3

Take an exploratory graphic that you’ve created in the last month, and add informative titles to make it easier for others to understand.

By its very nature, this exercise is left to readers.
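As a supplement to the Exercise 28\.2\.2 answer (not part of the original solution), a quick numeric check of the claim about within\-class relationships is to fit `hwy ~ displ` separately in each class and compare the slopes.

```r
library(ggplot2)  # provides the mpg data
library(purrr)

# Slope of hwy ~ displ fitted separately within each car class; per the answer
# above, most classes should show a slope near zero while subcompact cars show
# a clearly negative one.
slopes <- map_dbl(
  split(mpg, mpg$class),
  ~ coef(lm(hwy ~ displ, data = .x))[["displ"]]
)
sort(slopes)
```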
28\.3 Annotations
-----------------

### Exercise 28\.3\.1

Use `geom_text()` with infinite positions to place text at the four corners of the plot.

I can use similar code to the example in the text. However, I need to use `vjust` and `hjust` in order for the text to appear in the plot, and these need to be different for each corner. Since `geom_text()` takes `hjust` and `vjust` as aesthetics, I can add them to the data and mappings, and use a single `geom_text()` call instead of four different `geom_text()` calls with four different data arguments and four different values of the `hjust` and `vjust` arguments.

```
label <- tribble(
  ~displ, ~hwy, ~label, ~vjust, ~hjust,
  Inf, Inf, "Top right", "top", "right",
  Inf, -Inf, "Bottom right", "bottom", "right",
  -Inf, Inf, "Top left", "top", "left",
  -Inf, -Inf, "Bottom left", "bottom", "left"
)

ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  geom_text(aes(label = label, vjust = vjust, hjust = hjust), data = label)
```

### Exercise 28\.3\.2

Read the documentation for `annotate()`. How can you use it to add a text label to a plot without having to create a tibble?

With `annotate()`, you supply what would otherwise be aesthetic mappings directly as arguments:

```
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  annotate("text",
    x = Inf, y = Inf,
    label = "Increasing engine size is \nrelated to decreasing fuel economy.", vjust = "top", hjust = "right"
  )
```

### Exercise 28\.3\.3

How do labels with `geom_text()` interact with faceting? How can you add a label to a single facet? How can you put a different label in each facet? (Hint: think about the underlying data.)

If the facet variable is not specified, the text is drawn in all facets.

```
label <- tibble(
  displ = Inf,
  hwy = Inf,
  label = "Increasing engine size is \nrelated to decreasing fuel economy."
)

ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  geom_text(aes(label = label),
    data = label, vjust = "top", hjust = "right",
    size = 2
  ) +
  facet_wrap(~class)
```

To draw the label in only one facet, add a column to the label data frame with the value of the faceting variable(s) in which to draw it.

```
label <- tibble(
  displ = Inf,
  hwy = Inf,
  class = "2seater",
  label = "Increasing engine size is \nrelated to decreasing fuel economy."
)

ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  geom_text(aes(label = label),
    data = label, vjust = "top", hjust = "right",
    size = 2
  ) +
  facet_wrap(~class)
```

To draw a different label in each facet, vary the value of the faceting variable(s) in the label data:

```
label <- tibble(
  displ = Inf,
  hwy = Inf,
  class = unique(mpg$class),
  label = str_c("Label for ", class)
)

ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  geom_text(aes(label = label),
    data = label, vjust = "top", hjust = "right",
    size = 3
  ) +
  facet_wrap(~class)
```

### Exercise 28\.3\.4

What arguments to `geom_label()` control the appearance of the background box?

* `label.padding`: padding around the label
* `label.r`: amount of rounding in the corners
* `label.size`: size of the label border

(A sketch demonstrating these arguments, together with `arrow()`, follows Exercise 28\.3\.5 below.)

### Exercise 28\.3\.5

What are the four arguments to `arrow()`? How do they work? Create a series of plots that demonstrate the most important options.

The four arguments are (from the help for `arrow()`):

* `angle`: angle of the arrow head
* `length`: length of the arrow head
* `ends`: which ends of the line to draw an arrow head on
* `type`: `"open"` or `"closed"`: whether the arrow head is a closed or open triangle
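A brief sketch (not from the original solutions) that exaggerates the `geom_label()` box settings from Exercise 28\.3\.4 and shows the `arrow()` arguments from Exercise 28\.3\.5; the positions and values are arbitrary.

```r
library(ggplot2)
library(grid)  # unit() and arrow()

p <- ggplot(mpg, aes(displ, hwy)) +
  geom_point()

# Exaggerated label box: more padding, rounder corners, thicker border.
p + annotate("label",
  x = 5, y = 42, label = "big engines, low mpg",
  label.padding = unit(0.75, "lines"),
  label.r = unit(0.4, "lines"),
  label.size = 1
)

# A narrow, closed arrow head drawn at the end ("last") of a segment.
p + annotate("segment",
  x = 4.5, y = 40, xend = 6, yend = 27,
  arrow = arrow(
    angle = 20, length = unit(0.15, "inches"),
    ends = "last", type = "closed"
  )
)
```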
28\.4 Scales
------------

### Exercise 28\.4\.1

Why doesn’t the following code override the default scale?

```
df <- tibble(
  x = rnorm(10000),
  y = rnorm(10000)
)

ggplot(df, aes(x, y)) +
  geom_hex() +
  scale_colour_gradient(low = "white", high = "red") +
  coord_fixed()
```

It does not override the default scale because the colors in `geom_hex()` are set by the `fill` aesthetic, not the `color` aesthetic.

```
ggplot(df, aes(x, y)) +
  geom_hex() +
  scale_fill_gradient(low = "white", high = "red") +
  coord_fixed()
```

### Exercise 28\.4\.2

The first argument to every scale is the label for the scale. It is equivalent to using the `labs()` function.

```
ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(colour = class)) +
  geom_smooth(se = FALSE) +
  labs(
    x = "Engine displacement (L)",
    y = "Highway fuel economy (mpg)",
    colour = "Car type"
  )
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```

```
ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(colour = class)) +
  geom_smooth(se = FALSE) +
  scale_x_continuous("Engine displacement (L)") +
  scale_y_continuous("Highway fuel economy (mpg)") +
  scale_colour_discrete("Car type")
#> `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```

### Exercise 28\.4\.3

Change the display of the presidential terms by:

1. Combining the two variants shown above.
2. Improving the display of the y axis.
3. Labeling each term with the name of the president.
4. Adding informative plot labels.
5. Placing breaks every 4 years (this is trickier than it seems!).

```
fouryears <- lubridate::make_date(seq(year(min(presidential$start)),
  year(max(presidential$end)),
  by = 4
), 1, 1)

presidential %>%
  mutate(
    id = 33 + row_number(),
    name_id = fct_inorder(str_c(name, " (", id, ")"))
  ) %>%
  ggplot(aes(start, name_id, colour = party)) +
  geom_point() +
  geom_segment(aes(xend = end, yend = name_id)) +
  scale_colour_manual("Party", values = c(Republican = "red", Democratic = "blue")) +
  scale_y_discrete(NULL) +
  scale_x_date(NULL,
    breaks = presidential$start, date_labels = "'%y",
    minor_breaks = fouryears
  ) +
  ggtitle("Terms of US Presidents",
    subtitle = "Eisenhower (34th) to Obama (44th)"
  ) +
  theme(
    panel.grid.minor = element_blank(),
    axis.ticks.y = element_blank()
  )
```

To include both the start dates of presidential terms and every four years, I use different levels of emphasis. The presidential term start years are used as major breaks with thicker lines and x\-axis labels. Lines for every four years are indicated with minor breaks that use thinner lines to distinguish them from presidential term start years and to avoid cluttering the plot.

### Exercise 28\.4\.4

Use `override.aes` to make the legend on the following plot easier to see.

```
ggplot(diamonds, aes(carat, price)) +
  geom_point(aes(colour = cut), alpha = 1 / 20)
```

The problem with the legend is that the `alpha` value makes the colors hard to see. So I’ll override the alpha value to make the points solid in the legend.

```
ggplot(diamonds, aes(carat, price)) +
  geom_point(aes(colour = cut), alpha = 1 / 20) +
  theme(legend.position = "bottom") +
  guides(colour = guide_legend(nrow = 1, override.aes = list(alpha = 1)))
```

28\.5 Zooming
-------------

No exercises

28\.6 Themes
------------

No exercises

28\.7 Saving your plots
-----------------------

No exercises

28\.8 Learning more
-------------------

No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/r-markdown-formats.html
29 R Markdown formats ===================== No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/r-markdown-workflow.html
30 R Markdown workflow ====================== No exercises
Data Science
jrnold.github.io
https://jrnold.github.io/r4ds-exercise-solutions/references.html
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/index.html
Purpose
=======

This book contains my solutions and notes to Garrett Grolemund and Hadley Wickham’s excellent book, [R for Data Science](https://r4ds.had.co.nz/) (Grolemund and Wickham [2017](#ref-WickhamGrolemund2017)). *R for Data Science* (R4DS) is my go\-to recommendation for people getting started in R programming, data science, or the “tidyverse”.

First and foremost, this book was set up as a resource and refresher for myself[1](#fn1). If you are looking for a reliable solutions manual to check your answers as you work through R4DS, I would recommend using the solutions created and maintained by Jeffrey Arnold, [R for Data Science: Exercise Solutions](https://jrnold.github.io/r4ds-exercise-solutions/)[2](#fn2). Though feel free to use *Yet another ‘R for Data Science’ study guide* as another point of reference[3](#fn3).

Origin
------

I first read and completed the exercises to R4DS in early 2017 on the tail\-end of completing a Master’s in Analytics program. My second time going through R4DS came in early 2018, when Stephen Kimel and I organized an internal “R for Data Science” study group with our colleagues[4](#fn4). In June of 2019 I published my solutions and notes into this book.

Organization and features
-------------------------

*Chapters start with the following:*

* A list of “Key exercises” deemed good for discussion in a study group
* A list of functions (and sometimes notes) from the chapter[5](#fn5)

*Chapters also contain:*

* Solutions to exercises
  + Exercise subsections are arranged in the same chapter –\> section –\> subsection as the original book
  + Chapters, sections, and subsections without exercises are usually not included
  + The beginning of sections may occasionally contain additional notes, e.g. [3\.8: Position Adjustment](03-data-visualization.html#position-adjustment)
* The “Appendix” sections in chapters typically contain alternative solutions to problems or additional notes/thoughts pertaining to the chapter or a related topic
  + I use the numbering scheme {chapter}.{section}.{subsection}.{problem number} to refer to exercise solutions in “Appendix” sections
* There are a few cautions with using this book[6](#fn6)

Acknowledgements
----------------

*Thank you:*

* [Garrett Grolemund](https://twitter.com/StatGarrett) and [Hadley Wickham](https://twitter.com/hadleywickham) for writing a phenomenal book!
* The various [tidyverse](https://www.tidyverse.org/) and [RStudio](https://www.rstudio.com/) developers for producing outstanding packages and products, as well as resources for learning
* The [R for Data Science Online Learning Community](https://www.rfordatasci.com/) and [\#rstats](https://twitter.com/hashtag/rstats?src=hash&lang=en) communities for creating inspiring, safe places to post ideas, ask questions, and grow your R skills
* Stephen Kimel, who has co\-organized a data science study group with me and also provided feedback on my R4DS solutions. In many cases I changed my solution to an exercise to a method that mirrored his approach.

License
-------

This work is licensed under a [Creative Commons Attribution 4\.0 International License](https://creativecommons.org/licenses/by/4.0/).
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/03-data-visualization.html
Ch 3: Data visualization
========================

**Key questions:**

* 3\.6\.1\. \#6
* 3\.8\.1\. \#8

**Functions and notes:**

* `geom_point`: Add points to plot, key args: `x`, `y`, `size`, `stroke`, `colour`, `alpha`, `shape`
* `geom_smooth`: Add line and confidence intervals to x\-y plot, can use `se` to turn off standard errors, can use `method` to change algorithm to make line. `linetype` to make dotted line.
* `geom_bar`: Stack values on top of each other to make bars (default `stat = "count"`, can also change to `"identity"`. May want to make `y = ..prop..` to show y as proportion of values). `position` (default `"stack"`) may take on values of `"identity"`, `"dodge"`, `"fill"`
* `geom_count`: Make bar charts out of discrete row values in dataframe. `fill` to fill bars, `colour` to outline.
* `geom_jitter`: like `geom_point()` but with randomness added, use `width` and `height` args to control (could also use `geom_point()` with `position = "jitter"`)
* `geom_boxplot`: box and whiskers plot
* `geom_polygon`: Can use to plot polygons – use with objects created from `map_data()`
* `geom_abline`: use args `intercept` and `slope` to create line
* `facet_wrap`: Facet multiple charts by one variable; `scales = "free_x"` (or `"free"`, or `"free_y"`) is helpful
* `facet_grid`: Facet multiple charts by one or two variables: `space` is helpful arg (not within `facet_wrap()`)
* `stat_count`: Like `geom_bar()`
* `stat_summary`: can use to explicitly show ranges, e.g. with args `fun.ymin = min`, `fun.ymax = max`, `fun.y = median`
* `stat_bin`: Like `geom_histogram()`
* `stat_smooth`: Same as `geom_smooth` but can take a non\-standard geom
* `position` adjustments:
  + `identity`: ; `dodge`: ; `fill`: ;
  + `position_dodge` ; `position_fill` ; `position_identity` ; `position_jitter` ; `position_stack` ;
* ways to override default mapping
  + `coord_quickmap`: Set aspect ratio for maps
  + `coord_flip`: Flip x and y coordinates
  + `coord_polar`: Use polar coordinates – don’t use much (should set `theme(aspect.ratio = 1) + labs(x = NULL, y = NULL)`)
  + `coord_fixed`: Fix x and y to be same size distance between tickmarks

```
ggplot(data = <DATA>) +
  <GEOM_FUNCTION>(
    mapping = aes(<MAPPINGS>),
    stat = <STAT>,
    position = <POSITION>
  ) +
  <COORDINATE_FUNCTION> +
  <FACET_FUNCTION>
```

3\.2: First steps
-----------------

### 3\.2\.4

**1\. Run `ggplot(data = mpg)`. What do you see?**

```
ggplot(data = mpg)
```

A blank grey space.

**2\. How many rows are in `mpg`? How many columns?**

```
nrow(mpg)
```

```
## [1] 234
```

```
ncol(mpg)
```

```
## [1] 11
```

**3\. What does the `drv` variable describe? Read the help for `?mpg` to find out.**

Front wheel, rear wheel or 4 wheel drive.

**4\. Make a scatterplot of `hwy` vs `cyl`.**

```
ggplot(mpg)+
  geom_point(aes(x = hwy, y = cyl))
```

**5\. What happens if you make a scatterplot of `class` vs `drv`? Why is the plot not useful?**

```
ggplot(mpg)+
  geom_point(aes(x = class, y = drv))
```

The points stack up on top of one another so you don’t get a sense of how many are on each point.

*Any ideas for what methods you could use to improve the view of this data?*

Jitter points so they don’t line up, or make point size represent the number of points that are stacked.

3\.3: Aesthetic mappings
------------------------

### 3\.3\.1\.

**1\. What’s gone wrong with this code? Why are the points not blue?**

```
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = "blue"))
```

The `color` field is in the `aes` function so it is expecting a character or factor variable.
**2\. Which variables in `mpg` are categorical? Which variables are continuous? (Hint: type `?mpg` to read the documentation for the dataset). How can you see this information when you run `mpg`?** ``` mpg ``` ``` ## # A tibble: 234 x 11 ## manufacturer model displ year cyl trans drv cty hwy fl class ## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr> ## 1 audi a4 1.8 1999 4 auto~ f 18 29 p comp~ ## 2 audi a4 1.8 1999 4 manu~ f 21 29 p comp~ ## 3 audi a4 2 2008 4 manu~ f 20 31 p comp~ ## 4 audi a4 2 2008 4 auto~ f 21 30 p comp~ ## 5 audi a4 2.8 1999 6 auto~ f 16 26 p comp~ ## 6 audi a4 2.8 1999 6 manu~ f 18 26 p comp~ ## 7 audi a4 3.1 2008 6 auto~ f 18 27 p comp~ ## 8 audi a4 q~ 1.8 1999 4 manu~ 4 18 26 p comp~ ## 9 audi a4 q~ 1.8 1999 4 auto~ 4 16 25 p comp~ ## 10 audi a4 q~ 2 2008 4 manu~ 4 20 28 p comp~ ## # ... with 224 more rows ``` The data is already in tibble form, so just printing it shows the types, but you could also use the `glimpse` and `str` functions. **3\. Map a continuous variable to `color`, `size`, and `shape`. How do these aesthetics behave differently for categorical vs. continuous variables?** ``` ggplot(data = mpg) + geom_point(mapping = aes(x = cty, y = hwy, color = cyl, size = displ, shape = fl)) ``` `color`: For a continuous variable it applies a gradient; for a categorical variable it applies distinct colours based on the number of categories. `size`: For a continuous variable it sizes points in order of the values; for a categorical variable it applies an order that may be arbitrary if none is provided. `shape`: Will not allow you to input a continuous variable. **4\. What happens if you map the same variable to multiple aesthetics?** The variable will map onto both aesthetics. This can be redundant in some cases; in others it can be valuable for clarity. ``` ggplot(data = mpg)+ geom_point(mapping = aes(x = cty, y = hwy, color = fl, shape = fl)) ``` **5\. What does the `stroke` aesthetic do? What shapes does it work with? (Hint: use `?geom_point`)** ``` ?geom_point ``` ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_point(shape = 21, colour = "black", fill = "white", size = 5, stroke = 5) ``` ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_point(shape = 21, colour = "black", fill = "white", size = 5, stroke = 3) ``` For shapes that have a border (like `shape = 21`), you can colour the inside and outside separately. Use the stroke aesthetic to modify the width of the border (you can get similar effects by layering a small point on a bigger point). **6\. What happens if you map an aesthetic to something other than a variable name, like `aes(colour = displ < 5)`?** ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = displ < 5)) + geom_point() ``` The aesthetic is mapped to the result of the logical expression, i.e. whether `displ < 5` is `TRUE` or `FALSE` for each point. 3\.5: Facets ------------ ### 3\.5\.1\. **1\. What happens if you facet on a continuous variable?** ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy))+ geom_point()+ facet_wrap(~cyl) ``` It will facet along all of the unique values.
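As an aside (my addition, assuming a genuinely continuous variable such as `displ`): faceting on it directly creates one panel per unique value, which quickly becomes unreadable, so a common workaround is to bin the variable first, e.g. with `cut_number()` or `cut_width()`:

```
# facet_wrap(~ displ) would create one panel per unique displ value;
# binning into four groups of roughly equal size keeps the facets readable
ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) +
  geom_point() +
  facet_wrap(~ cut_number(displ, 4))
```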
**2\. What do the empty cells in the plot with `facet_grid(drv ~ cyl)` mean?** ``` ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy)) + facet_grid(drv ~ cyl) ``` **How do they relate to this plot?** ``` ggplot(data = mpg) + geom_point(mapping = aes(x = drv, y = cyl)) ``` They represent the locations where there is no point on the above graph (this could be made clearer by giving a consistent order to the axes). **3\. What plots does the following code make? What does `.` do?** ``` ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy)) + facet_grid(drv ~ .) ``` ``` ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy)) + facet_grid(. ~ cyl) ``` The `.` acts as a placeholder for “no variable”, so you can specify whether to facet only by rows (`drv ~ .`) or only by columns (`. ~ cyl`). **4\. Take the first faceted plot in this section:** ``` ggplot(data = mpg) + geom_point(mapping = aes(x = displ, y = hwy)) + facet_wrap(~ class, nrow = 2) ``` **What are the advantages to using faceting instead of the colour aesthetic? What are the disadvantages? How might the balance change if you had a larger dataset?** Faceting prevents overlapping points in the data. A disadvantage is that you have to move your eye between graphs to compare groups. Some groups also don’t have much data, so their panels don’t present much information. With a larger dataset you may be more comfortable faceting, as each group should then have enough points to see a pattern. **5\. Read `?facet_wrap`. What does `nrow` do? What does `ncol` do? What other options control the layout of the individual panels? Why doesn’t `facet_grid()` have `nrow` and `ncol` arguments?** `nrow` and `ncol` specify the number of rows or columns to lay the panels out in; `facet_grid()` does not have these arguments because its layout is determined by the number of unique values of each faceting variable. Another important option is `scales`, which lets you specify whether the scales are allowed to vary between panels. **6\. When using `facet_grid()` you should usually put the variable with more unique levels in the columns. Why?** I’m not sure exactly why; comparing the two layouts below, the difference is not completely clear to me. ``` #more unique levels on columns ggplot(data = mpg) + geom_point(mapping = aes(x = cty, y = hwy)) + facet_grid(year ~ class) ``` ``` #more unique levels on rows ggplot(data = mpg) + geom_point(mapping = aes(x = cty, y = hwy)) + facet_grid(class ~ year) ``` My guess would be that it’s because our computer screens are generally wider than they are tall, so there is more space for viewing a larger number of panels going across columns than down rows. 3\.6: Geometric Objects ----------------------- ### 3\.6\.1 **1\. What geom would you use to draw a line chart? A boxplot? A histogram? An area chart?** \* `geom_line` \* `geom_boxplot` \* `geom_histogram` \* `geom_area` \+ Notice that `geom_area` is just a special case of `geom_ribbon` *Example of `geom_area`:* ``` huron <- data.frame(year = 1875:1972, level = as.vector(LakeHuron) - 575) h <- ggplot(huron, aes(year)) h + geom_ribbon(aes(ymin = 0, ymax = level)) ``` ``` h + geom_area(aes(y = level)) ``` ``` # Add aesthetic mappings h + geom_ribbon(aes(ymin = level - 1, ymax = level + 1), fill = "grey70") + geom_line(aes(y = level)) ``` ``` h + geom_area(aes(y = level), fill = "grey70") + geom_line(aes(y = level)) ```
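For completeness (a sketch I am adding, using the built\-in `economics`, `mpg`, and `diamonds` datasets), minimal versions of the other three chart types named above:

```
# line chart of US unemployment over time
ggplot(economics, aes(x = date, y = unemploy)) +
  geom_line()

# boxplot of highway mileage by class
ggplot(mpg, aes(x = class, y = hwy)) +
  geom_boxplot()

# histogram of diamond carat
ggplot(diamonds, aes(x = carat)) +
  geom_histogram(binwidth = 0.1)
```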
**2\. Run this code in your head and predict what the output will look like. Then, run the code in R and check your predictions.** ``` ggplot(data = mpg, mapping = aes(x = displ, y = hwy, color = drv)) + geom_point() + geom_smooth(se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` **3\. What does `show.legend = FALSE` do? What happens if you remove it? Why do you think I used it earlier in the chapter?** ``` ggplot(data = mpg) + geom_smooth( mapping = aes(x = displ, y = hwy, color = drv), show.legend = FALSE ) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` It gets rid of the legend that would be associated with this geom; removing it brings the legend back. It was used earlier in the chapter to keep the plot consistent with the neighbouring graphs, which did not include a legend for `drv`. **4\. What does the `se` argument to `geom_smooth()` do?** `se` here stands for standard error, so if we specify it as `FALSE` we are saying we do not want to show the standard errors for the plot. **5\. Will these two graphs look different? Why/why not?** ``` ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) + geom_point() + geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot() + geom_point(data = mpg, mapping = aes(x = displ, y = hwy)) + geom_smooth(data = mpg, mapping = aes(x = displ, y = hwy)) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` No, because the local mappings for each geom are the same as the global mappings in the other. **6\. Recreate the R code necessary to generate the following graphs.** ``` ggplot(mpg, aes(displ, hwy))+ geom_point() + geom_smooth(se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot(mpg, aes(displ, hwy, group = drv))+ geom_point() + geom_smooth(se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot(mpg, aes(displ, hwy, colour = drv))+ geom_point() + geom_smooth(se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot(mpg, aes(displ, hwy))+ geom_point(aes(colour = drv)) + geom_smooth(se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot(mpg, aes(displ, hwy))+ geom_point(aes(color = drv)) + geom_smooth(aes(linetype = drv), se = FALSE) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ggplot(mpg, aes(displ, hwy)) + geom_point(colour = "white", size = 4) + geom_point(aes(colour = drv)) ``` 3\.7: Statistical transformations --------------------------------- ### 3\.7\.1\. **1\. What is the default `geom` associated with `stat_summary()`? How could you rewrite the previous plot to use that geom function instead of the stat function?** The default is `geom_pointrange`, the point being the mean, and the lines being the standard error of the mean of the y values. ``` ggplot(mpg) + stat_summary(aes(cyl, cty)) ``` ``` ## No summary function supplied, defaulting to `mean_se() ``` *Rewritten with geom[7](#fn7):* ``` ggplot(mpg)+ geom_pointrange(aes(x = cyl, y = cty), stat = "summary") ``` ``` ## No summary function supplied, defaulting to `mean_se() ``` The specific example in the chapter, though, is actually not the default: ``` ggplot(data = diamonds) + stat_summary( mapping = aes(x = cut, y = depth), fun.ymin = min, fun.ymax = max, fun.y = median ) ``` *Rewritten with geom:* ``` ggplot(data = diamonds)+ geom_pointrange(aes(x = cut, y = depth), stat = "summary", fun.ymin = "min", fun.ymax = "max", fun.y = "median") ``` **2\. What does `geom_col()` do?
How is it different to `geom_bar()`?** `geom_col` has `"identity"` as the default `stat`, so it expects to receive a variable that already has the values aggregated[8](#fn8) **3\. Most geoms and stats come in pairs that are almost always used in concert. Read through the documentation and make a list of all the pairs. What do they have in common?** ``` ?ggplot2 ``` **4\. What variables does `stat_smooth()` compute? What parameters control its behaviour?** * See here: [http://ggplot2\.tidyverse.org/reference/\#section\-layer\-stats](http://ggplot2.tidyverse.org/reference/#section-layer-stats) for a helpful resource. * Also, someone who aggregated some online: [http://sape.inf.usi.ch/quick\-reference/ggplot2/geom](http://sape.inf.usi.ch/quick-reference/ggplot2/geom)[9](#fn9) **5\. In our proportion bar chart, we need to set `group = 1`. Why? In other words what is the problem with these two graphs?** ``` ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, y = ..prop..)) ``` ``` ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, fill = color, y = ..prop..)) ``` **Fixed graphs:** 1. ``` ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, y = ..prop.., group = 1)) ``` 2. For this second graph though, I would think you would want something more like the following: ``` ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, fill = color), position = "fill")+ ylab("prop") # ylab would say "count" w/o this ``` (Which could also be generated by this code, using `count()` and `geom_col()`) ``` diamonds %>% count(cut, color) %>% group_by(cut) %>% mutate(prop = n / sum(n)) %>% ggplot(aes(x = cut, y = prop, fill = color))+ geom_col() ``` 3\.8: Position Adjustment ------------------------- *Some “dodge” examples:* ``` ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, fill = clarity), position = "dodge") ``` ``` diamonds %>% count(cut, color) %>% ggplot(aes(x = cut, y = n, fill = color))+ geom_col(position = "dodge") ``` * the `interaction()` function can also sometimes be helpful for these types of charts Looking at `geom_jitter` and only changing the width: ``` ggplot(data = mpg, mapping = aes(x = drv, y = hwy))+ geom_jitter(height = 0, width = .2) ``` ### 3\.8\.1\. **1\. What is the problem with this plot? How could you improve it?** ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_point() ``` The points overlap; you could use `geom_jitter` instead ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_jitter() ``` **2\. What parameters to `geom_jitter()` control the amount of jittering?** `height` and `width` **3\. Compare and contrast `geom_jitter()` with `geom_count()`.** Take the above chart and instead use `geom_count`. ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_count() ``` Can also use `geom_count` with `color`, and can use “jitter” in the `position` arg. ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = drv)) + geom_count() ``` ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = drv)) + geom_count(position = "jitter") ``` ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = drv)) + geom_jitter(size = 3, alpha = 0.3) ``` One problem with `geom_count` is that the shapes can still block out other shapes of different colours at that same point. You can flip the stacking order of the colours with `position = "dodge"`. Still, this seems limited. ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = drv)) + geom_count(position = "dodge") ``` ``` ## Warning: Width not defined. Set with `position_dodge(width = ?)` ```
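One way to address the warning above (a hedged sketch; the width value is just an illustrative guess) is to supply the dodge width explicitly via `position_dodge()`:

```
# give position_dodge() an explicit width so ggplot doesn't have to guess;
# 0.6 is an arbitrary value chosen for illustration
ggplot(data = mpg, mapping = aes(x = cty, y = hwy, colour = drv)) +
  geom_count(position = position_dodge(width = 0.6))
```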
**4\. What’s the default position adjustment for `geom_boxplot()`? Create a visualisation of the mpg dataset that demonstrates it.** `dodge` (though it seems like changing it to `identity` gives the same result here) ``` ggplot(data = mpg, mapping = aes(x = class, y = hwy)) + geom_boxplot() ``` 3\.9: Coordinate systems ------------------------ * `coord_flip` is helpful, especially for quickly tackling issues with axis labels * `coord_quickmap` is important to remember if plotting spatial data. * `coord_polar` is useful for circular displays such as pie charts. * `map_data` for extracting data on maps of locations ### 3\.9\.1\. **1\. Turn a stacked bar chart into a pie chart using `coord_polar()`.** These are more illustrative than anything; here is a note from the documentation: *NOTE: Use these plots with caution \- polar coordinates has major perceptual problems. The main point of these examples is to demonstrate how these common plots can be described in the grammar. Use with EXTREME caution.* ``` ggplot(mpg, aes(x = 1, fill = class))+ geom_bar(position = "fill") + coord_polar(theta = "y") + scale_x_continuous(labels = NULL) ``` If I want to make multiple levels: ``` ggplot(mpg, aes(x = as.factor(cyl), fill = class))+ geom_bar(position = "fill") + coord_polar(theta = "y") ``` **2\. What does `labs()` do? Read the documentation.** Used for giving labels to the plot (axis labels, title, legend titles, etc.). ``` ?labs ``` **3\. What’s the difference between `coord_quickmap()` and `coord_map()`?** The first is an approximation of the projection, useful for smaller regions. For this example, I do not see substantial differences. ``` nz <- map_data("nz") ggplot(nz,aes(long,lat,group=group))+ geom_polygon(fill="red",colour="black")+ coord_quickmap() ``` ``` ggplot(nz,aes(long,lat,group=group))+ geom_polygon(fill="red",colour="black")+ coord_map() ``` **4\. What does the plot below tell you about the relationship between city and highway in `mpg`? Why is `coord_fixed()` important? What does `geom_abline()` do?** * `geom_abline()` adds a line with a given intercept and slope (either given by `aes` or by the `intercept` and `slope` args) * `coord_fixed()` ensures that the ratio between the x and y axes stays at a specified relationship (default: 1\). This is important for easily seeing the magnitude of the relationship between variables. ``` ggplot(data = mpg, mapping = aes(x = cty, y = hwy)) + geom_point() + geom_abline() + coord_fixed() ``` Appendix -------- ### 3\.7\.1\.1 extension ``` ggplot(mpg, aes(x = cyl, y = cty, group = cyl))+ geom_pointrange(stat = "summary") ``` ``` ## No summary function supplied, defaulting to `mean_se() ``` This seems to be the same as what you would get by doing the following with dplyr: ``` mpg %>% group_by(cyl) %>% dplyr::summarise(mean = mean(cty), sd = (sum((cty - mean(cty))^2) / (n() - 1))^0.5, n = n(), se = sd / n^0.5, lower = mean - se, upper = mean + se) %>% ggplot(aes(x = cyl, y = mean, group = cyl))+ geom_pointrange(aes(ymin = lower, ymax = upper)) ``` Other geoms you could have set stat\_summary to: `crossbar`: ``` ggplot(mpg) + stat_summary(aes(cyl, cty), geom = "crossbar") ``` ``` ## No summary function supplied, defaulting to `mean_se() ``` `errorbar`: ``` ggplot(mpg) + stat_summary(aes(cyl, cty), geom = "errorbar") ``` ``` ## No summary function supplied, defaulting to `mean_se() ``` `linerange`: ``` ggplot(mpg) + stat_summary(aes(cyl, cty), geom = "linerange") ``` ``` ## No summary function supplied, defaulting to `mean_se() ```
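One more option (my addition): the “No summary function supplied” message can be avoided by passing the summary function explicitly; `mean_se()` is the default that `stat_summary()` falls back to, so this should be equivalent:

```
# pass the summary function explicitly; mean_se() is the default
# that stat_summary() falls back to when none is supplied
ggplot(mpg) +
  stat_summary(aes(cyl, cty), fun.data = mean_se, geom = "pointrange")
```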
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/05-data-transformations.html
Ch. 5: Data transformations =========================== **Key questions:** * 5\.2\.4 \#1 * 5\.3\.1 \#4 * 5\.4\.1 \#1 * 5\.5\.2 \#1 * 5\.6\.7 \#1, 3, 6 * 5\.7\.1 \#2, 3, 4 **Functions and notes:** * `filter()`: for filtering rows by some condition(s) * `arrange()`: for ordering rows by some condition(s) + `desc`: order by descending instead (often use within arrange or with ranking functions) * `select()`: for selecting columns by name, position, or criteria + helper functions: `everything`, `starts_with`, `ends_with`, `contains`, `matches` (selects variables that match a regular expression), `num_range("x", 1:3)` (matches `x1`, `x2` and `x3`) * `rename()`: rename variables w/o dropping variables not indicated * `mutate()`: for changing columns and adding new columns * `group_by()`: for performing operations grouped by the values of some fields * `summarise()`: for collapsing dataframes into individual rows or aggregates – typically used in conjunction with `group_by()` to aggregate * `%>%`: pass the previous output into the first position of the next argument, think of it as saying, “then you do…” * `count`: shortcut for `group_by([var]) %>% summarise(n = n())` * `near`: Are two values essentially equal (use to test equivalence; deals with the oddities of floating point numbers) * `is.na`: TRUE output if `NA` (and related values) else FALSE * `between`: `between(Sepal.Length, 1, 3)` is equivalent to `Sepal.Length >= 1 & Sepal.Length <= 3` * `transmute`: mutate but only keep the outputted column(s) * `lead`, `lag`: take the value n positions ahead or behind * `log`, `log2`, `log10`: log functions of base `e`, 2, 10 * `cumsum`, `cumprod`, `cummin`, `cummax`, `cummean`: Common cumulative functions * `<`, `<=`, `>`, `>=`, `!=`: Comparison operators * `min_rank`, `row_number`, `dense_rank`, `percent_rank`, `cume_dist`, `ntile`: common ranking functions * Location: `mean`; `median` * Measures of spread: `sd`: standard deviation; `IQR()`: Interquartile range; `mad()`: median absolute deviation ``` x <- c(1, 2, 3, 4, 6, 7, 8, 8, 10, 100) IQR(x) ``` ``` ## [1] 4.75 ``` ``` mad(x) ``` ``` ## [1] 4.4478 ``` ``` sd(x) ``` ``` ## [1] 30.04238 ``` * Rank: `min`; `quantile`; `max` * Position: `first(x)`, `nth(x, 2)`, `last(x)`. These work similarly to `x[1]`, `x[2]`, and `x[length(x)]` but let you set a default value if that position does not exist ``` first(x) ``` ``` ## [1] 1 ``` ``` nth(x, 5) ``` ``` ## [1] 6 ``` ``` last(x) ``` ``` ## [1] 100 ``` * measures of rank: `min`, `max`, `rank`; `quantile(x, 0.25)` returns the value at the 0\.25 quantile (a generalization of the median that lets you specify the cut point) * counts: `n()` for rows, `sum(!is.na(x))` for non\-missing rows, for distinct count, use `n_distinct(x)` * Counts and proportions of logical values: `sum(x > 10)`, `mean(y == 0)` * `range()` returns a vector containing the min and max of the values in a vector (so returns two values). * `vignette`: function to open vignettes + e.g. `vignette("window-functions")` 5\.2: Filter rows ----------------- ### 5\.2\.4\. **1\. Find all flights that…** *1\.1\. Find flights that had an arrival delay of 2\+ hrs* ``` filter(flights, arr_delay >= 120) %>% glimpse() ``` ``` ## Observations: 10,200 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ dep_time <int> 811, 848, 957, 1114, 1505, 1525, 1549, 1558, 17...
## $ sched_dep_time <int> 630, 1835, 733, 900, 1310, 1340, 1445, 1359, 16... ## $ dep_delay <dbl> 101, 853, 144, 134, 115, 105, 64, 119, 62, 103,... ## $ arr_time <int> 1047, 1001, 1056, 1447, 1638, 1831, 1912, 1718,... ## $ sched_arr_time <int> 830, 1950, 853, 1222, 1431, 1626, 1656, 1515, 1... ## $ arr_delay <dbl> 137, 851, 123, 145, 127, 125, 136, 123, 123, 13... ## $ carrier <chr> "MQ", "MQ", "UA", "UA", "EV", "B6", "EV", "EV",... ## $ flight <int> 4576, 3944, 856, 1086, 4497, 525, 4181, 5712, 4... ## $ tailnum <chr> "N531MQ", "N942MQ", "N534UA", "N76502", "N17984... ## $ origin <chr> "LGA", "JFK", "EWR", "LGA", "EWR", "EWR", "EWR"... ## $ dest <chr> "CLT", "BWI", "BOS", "IAH", "RIC", "MCO", "MCI"... ## $ air_time <dbl> 118, 41, 37, 248, 63, 152, 234, 53, 119, 154, 2... ## $ distance <dbl> 544, 184, 200, 1416, 277, 937, 1092, 228, 533, ... ## $ hour <dbl> 6, 18, 7, 9, 13, 13, 14, 13, 16, 16, 13, 14, 16... ## $ minute <dbl> 30, 35, 33, 0, 10, 40, 45, 59, 30, 20, 25, 22, ... ## $ time_hour <dttm> 2013-01-01 06:00:00, 2013-01-01 18:00:00, 2013... ``` *1\.2\.flew to Houston IAH or HOU* ``` filter(flights, dest %in% c("IAH", "HOU")) ``` ``` ## # A tibble: 9,313 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 623 627 -4 933 ## 4 2013 1 1 728 732 -4 1041 ## 5 2013 1 1 739 739 0 1104 ## 6 2013 1 1 908 908 0 1228 ## 7 2013 1 1 1028 1026 2 1350 ## 8 2013 1 1 1044 1045 -1 1352 ## 9 2013 1 1 1114 900 134 1447 ## 10 2013 1 1 1205 1200 5 1503 ## # ... with 9,303 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.3\.flew through American, United or Delta* ``` filter(flights, carrier %in% c("UA", "AA","DL")) ``` ``` ## # A tibble: 139,504 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 542 540 2 923 ## 4 2013 1 1 554 600 -6 812 ## 5 2013 1 1 554 558 -4 740 ## 6 2013 1 1 558 600 -2 753 ## 7 2013 1 1 558 600 -2 924 ## 8 2013 1 1 558 600 -2 923 ## 9 2013 1 1 559 600 -1 941 ## 10 2013 1 1 559 600 -1 854 ## # ... with 139,494 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.4\. Departed in Summer* ``` filter(flights, month <= 8 & month >= 6) ``` ``` ## # A tibble: 86,995 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 6 1 2 2359 3 341 ## 2 2013 6 1 451 500 -9 624 ## 3 2013 6 1 506 515 -9 715 ## 4 2013 6 1 534 545 -11 800 ## 5 2013 6 1 538 545 -7 925 ## 6 2013 6 1 539 540 -1 832 ## 7 2013 6 1 546 600 -14 850 ## 8 2013 6 1 551 600 -9 828 ## 9 2013 6 1 552 600 -8 647 ## 10 2013 6 1 553 600 -7 700 ## # ... with 86,985 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.5\. 
Arrived more than 2 hours late, but didn’t leave late* ``` filter(flights, arr_delay > 120, dep_delay >= 0) ``` ``` ## # A tibble: 10,008 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 811 630 101 1047 ## 2 2013 1 1 848 1835 853 1001 ## 3 2013 1 1 957 733 144 1056 ## 4 2013 1 1 1114 900 134 1447 ## 5 2013 1 1 1505 1310 115 1638 ## 6 2013 1 1 1525 1340 105 1831 ## 7 2013 1 1 1549 1445 64 1912 ## 8 2013 1 1 1558 1359 119 1718 ## 9 2013 1 1 1732 1630 62 2028 ## 10 2013 1 1 1803 1620 103 2008 ## # ... with 9,998 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.6\. were delayed at least an hour, but made up over 30 mins in flight* ``` filter(flights, (arr_delay - dep_delay) <= -30, dep_delay >= 60) ``` ``` ## # A tibble: 2,074 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 1716 1545 91 2140 ## 2 2013 1 1 2205 1720 285 46 ## 3 2013 1 1 2326 2130 116 131 ## 4 2013 1 3 1503 1221 162 1803 ## 5 2013 1 3 1821 1530 171 2131 ## 6 2013 1 3 1839 1700 99 2056 ## 7 2013 1 3 1850 1745 65 2148 ## 8 2013 1 3 1923 1815 68 2036 ## 9 2013 1 3 1941 1759 102 2246 ## 10 2013 1 3 1950 1845 65 2228 ## # ... with 2,064 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` ``` # #Equivalent solution: # filter(flights, (arr_delay - dep_delay) <= -30 & dep_delay >= 60) ``` *1\.7\. departed between midnight and 6am (inclusive)* ``` filter(flights, dep_time >= 0 & dep_time <= 600) ``` ``` ## # A tibble: 9,344 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 542 540 2 923 ## 4 2013 1 1 544 545 -1 1004 ## 5 2013 1 1 554 600 -6 812 ## 6 2013 1 1 554 558 -4 740 ## 7 2013 1 1 555 600 -5 913 ## 8 2013 1 1 557 600 -3 709 ## 9 2013 1 1 557 600 -3 838 ## 10 2013 1 1 558 600 -2 753 ## # ... with 9,334 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` ``` # # Equivalent solution: # filter(flights, dep_time >= 0, dep_time <= 600) ``` **2\. Another useful `dplyr` filtering helper is `between()`. What does it do? Can you use it to simplify the code needed to answer the previous challenges?** This is a shortcut for `x >= left & x <= right` solving 1\.7\. using `between`: ``` filter(flights, between(dep_time, 0, 600)) ``` **3\. How many flights have a missing `dep_time`? What other variables are missing? What might these rows represent?** ``` filter(flights, is.na(dep_time)) ``` ``` ## # A tibble: 8,255 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 NA 1630 NA NA ## 2 2013 1 1 NA 1935 NA NA ## 3 2013 1 1 NA 1500 NA NA ## 4 2013 1 1 NA 600 NA NA ## 5 2013 1 2 NA 1540 NA NA ## 6 2013 1 2 NA 1620 NA NA ## 7 2013 1 2 NA 1355 NA NA ## 8 2013 1 2 NA 1420 NA NA ## 9 2013 1 2 NA 1321 NA NA ## 10 2013 1 2 NA 1545 NA NA ## # ... 
with 8,245 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` 8255, perhaps these are canceled flights. **4\. Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing? Why is `FALSE & NA` not missing? Can you figure out the general rule? (`NA * 0` is a tricky counterexample!)** ``` NA^0 ``` ``` ## [1] 1 ``` Anything raised to the 0 is 1\. ``` FALSE & NA ``` ``` ## [1] FALSE ``` For the “AND” operator `&` for it to be `TRUE` both values would need to be `TRUE` so if one is `FALSE` the entire statment must be. ``` TRUE | NA ``` ``` ## [1] TRUE ``` The “OR” operator `|` specifies that if at least one of the values is `TRUE` the whole statement is, so because one is already `TRUE` the whole statement must be. ``` NA*0 ``` ``` ## [1] NA ``` This does not come\-out to 0 as expected because the laws of addition and multiplication here only hold for natural numbers, but it is possible that `NA` could represent `Inf` or `-Inf` in which case the outut is `NaN` rather than 0\. ``` Inf*0 ``` ``` ## [1] NaN ``` See this article for more details: [https://math.stackexchange.com/questions/28940/why\-is\-infinity\-multiplied\-by\-zero\-not\-an\-easy\-zero\-answer](https://math.stackexchange.com/questions/28940/why-is-infinity-multiplied-by-zero-not-an-easy-zero-answer) . 5\.3: Arrange rows ------------------ ### 5\.3\.1\. **1\. use `arrange()` to sort out all missing values to start** ``` df <- tibble(x = c(5, 2, NA)) arrange(df, !is.na(x)) ``` ``` ## # A tibble: 3 x 1 ## x ## <dbl> ## 1 NA ## 2 5 ## 3 2 ``` **2\. Find most delayed departures** ``` arrange(flights, desc(dep_delay)) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 6, 1, 9, 7, 4, 3, 6, 7, 12, 5, 1, 2, 5, 12, ... ## $ day <int> 9, 15, 10, 20, 22, 10, 17, 27, 22, 5, 3, 1, 10,... ## $ dep_time <int> 641, 1432, 1121, 1139, 845, 1100, 2321, 959, 22... ## $ sched_dep_time <int> 900, 1935, 1635, 1845, 1600, 1900, 810, 1900, 7... ## $ dep_delay <dbl> 1301, 1137, 1126, 1014, 1005, 960, 911, 899, 89... ## $ arr_time <int> 1242, 1607, 1239, 1457, 1044, 1342, 135, 1236, ... ## $ sched_arr_time <int> 1530, 2120, 1810, 2210, 1815, 2211, 1020, 2226,... ## $ arr_delay <dbl> 1272, 1127, 1109, 1007, 989, 931, 915, 850, 895... ## $ carrier <chr> "HA", "MQ", "MQ", "AA", "MQ", "DL", "DL", "DL",... ## $ flight <int> 51, 3535, 3695, 177, 3075, 2391, 2119, 2007, 20... ## $ tailnum <chr> "N384HA", "N504MQ", "N517MQ", "N338AA", "N665MQ... ## $ origin <chr> "JFK", "JFK", "EWR", "JFK", "JFK", "JFK", "LGA"... ## $ dest <chr> "HNL", "CMH", "ORD", "SFO", "CVG", "TPA", "MSP"... ## $ air_time <dbl> 640, 74, 111, 354, 96, 139, 167, 313, 109, 149,... ## $ distance <dbl> 4983, 483, 719, 2586, 589, 1005, 1020, 2454, 76... ## $ hour <dbl> 9, 19, 16, 18, 16, 19, 8, 19, 7, 17, 20, 18, 8,... ## $ minute <dbl> 0, 35, 35, 45, 0, 0, 10, 0, 59, 0, 55, 35, 30, ... ## $ time_hour <dttm> 2013-01-09 09:00:00, 2013-06-15 19:00:00, 2013... ``` **3\. Find the fastest flights** ``` arrange(flights, air_time) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 4, 12, 2, 2, 2, 3, 3, 3, 3, 5, 5, 6, 8, 9, 9... ## $ day <int> 16, 13, 6, 3, 5, 12, 2, 8, 18, 19, 8, 19, 12, 1... 
## $ dep_time <int> 1355, 537, 922, 2153, 1303, 2123, 1450, 2026, 1... ## $ sched_dep_time <int> 1315, 527, 851, 2129, 1315, 2130, 1500, 1935, 1... ## $ dep_delay <dbl> 40, 10, 31, 24, -12, -7, -10, 51, 87, 41, 137, ... ## $ arr_time <int> 1442, 622, 1021, 2247, 1342, 2211, 1547, 2131, ... ## $ sched_arr_time <int> 1411, 628, 954, 2224, 1411, 2225, 1608, 2056, 1... ## $ arr_delay <dbl> 31, -6, 27, 23, -29, -14, -21, 35, 67, 19, 109,... ## $ carrier <chr> "EV", "EV", "EV", "EV", "EV", "EV", "US", "9E",... ## $ flight <int> 4368, 4631, 4276, 4619, 4368, 4619, 2132, 3650,... ## $ tailnum <chr> "N16911", "N12167", "N27200", "N13913", "N13955... ## $ origin <chr> "EWR", "EWR", "EWR", "EWR", "EWR", "EWR", "LGA"... ## $ dest <chr> "BDL", "BDL", "BDL", "PHL", "BDL", "PHL", "BOS"... ## $ air_time <dbl> 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,... ## $ distance <dbl> 116, 116, 116, 80, 116, 80, 184, 94, 116, 116, ... ## $ hour <dbl> 13, 5, 8, 21, 13, 21, 15, 19, 13, 21, 21, 21, 2... ## $ minute <dbl> 15, 27, 51, 29, 15, 30, 0, 35, 29, 45, 59, 59, ... ## $ time_hour <dttm> 2013-01-16 13:00:00, 2013-04-13 05:00:00, 2013... ``` **4\. Flights traveling the longest distance** ``` arrange(flights, desc(distance)) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... ## $ dep_time <int> 857, 909, 914, 900, 858, 1019, 1042, 901, 641, ... ## $ sched_dep_time <int> 900, 900, 900, 900, 900, 900, 900, 900, 900, 90... ## $ dep_delay <dbl> -3, 9, 14, 0, -2, 79, 102, 1, 1301, -1, -5, 1, ... ## $ arr_time <int> 1516, 1525, 1504, 1516, 1519, 1558, 1620, 1504,... ## $ sched_arr_time <int> 1530, 1530, 1530, 1530, 1530, 1530, 1530, 1530,... ## $ arr_delay <dbl> -14, -5, -26, -14, -11, 28, 50, -26, 1272, -41,... ## $ carrier <chr> "HA", "HA", "HA", "HA", "HA", "HA", "HA", "HA",... ## $ flight <int> 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51,... ## $ tailnum <chr> "N380HA", "N380HA", "N380HA", "N384HA", "N381HA... ## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "JFK", "JFK", "JFK"... ## $ dest <chr> "HNL", "HNL", "HNL", "HNL", "HNL", "HNL", "HNL"... ## $ air_time <dbl> 659, 638, 616, 639, 635, 611, 612, 645, 640, 63... ## $ distance <dbl> 4983, 4983, 4983, 4983, 4983, 4983, 4983, 4983,... ## $ hour <dbl> 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,... ## $ minute <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ time_hour <dttm> 2013-01-01 09:00:00, 2013-01-02 09:00:00, 2013... ``` **and the shortest distance.** ``` arrange(flights, distance) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 27, 3, 4, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ... ## $ dep_time <int> NA, 2127, 1240, 1829, 2128, 1155, 2125, 2124, 2... ## $ sched_dep_time <int> 106, 2129, 1200, 1615, 2129, 1200, 2129, 2129, ... ## $ dep_delay <dbl> NA, -2, 40, 134, -1, -5, -4, -5, -3, -3, 4, 6, ... ## $ arr_time <int> NA, 2222, 1333, 1937, 2218, 1241, 2224, 2212, 2... ## $ sched_arr_time <int> 245, 2224, 1306, 1721, 2224, 1306, 2224, 2224, ... ## $ arr_delay <dbl> NA, -2, 27, 136, -6, -25, 0, -12, 39, -7, -1, 9... ## $ carrier <chr> "US", "EV", "EV", "EV", "EV", "EV", "EV", "EV",... ## $ flight <int> 1632, 3833, 4193, 4502, 4645, 4193, 4619, 4619,... 
## $ tailnum <chr> NA, "N13989", "N14972", "N15983", "N27962", "N1... ## $ origin <chr> "EWR", "EWR", "EWR", "EWR", "EWR", "EWR", "EWR"... ## $ dest <chr> "LGA", "PHL", "PHL", "PHL", "PHL", "PHL", "PHL"... ## $ air_time <dbl> NA, 30, 30, 28, 32, 29, 22, 25, 30, 27, 30, 30,... ## $ distance <dbl> 17, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80,... ## $ hour <dbl> 1, 21, 12, 16, 21, 12, 21, 21, 21, 21, 21, 21, ... ## $ minute <dbl> 6, 29, 0, 15, 29, 0, 29, 29, 30, 29, 29, 29, 17... ## $ time_hour <dttm> 2013-07-27 01:00:00, 2013-01-03 21:00:00, 2013... ``` 5\.4: Select columns -------------------- ### 5\.4\.1\. **1\. Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from `flights`.** ``` vars <- c("dep_time", "dep_delay", "arr_time", "arr_delay") ``` ``` #method 1 select(flights, vars) #method 2, probably indexes <- which(names(flights) %in% vars) select(flights, indexes) #method 3 select(flights, contains("_time"), contains("_delay"), -contains("sched"), -contains("air")) ``` ``` #method 4 select(flights, starts_with("dep"), starts_with("arr")) %>% select(ends_with("time"), ends_with("delay")) ``` ``` ## # A tibble: 336,776 x 4 ## dep_time arr_time dep_delay arr_delay ## <int> <int> <dbl> <dbl> ## 1 517 830 2 11 ## 2 533 850 4 20 ## 3 542 923 2 33 ## 4 544 1004 -1 -18 ## 5 554 812 -6 -25 ## 6 554 740 -4 12 ## 7 555 913 -5 19 ## 8 557 709 -3 -14 ## 9 557 838 -3 -8 ## 10 558 753 -2 8 ## # ... with 336,766 more rows ``` **2\. What happens if you include the name of a variable multiple times in a `select()` call?** It only shows\-up once. **3\. What does the `one_of()` function do? Why might it be helpful in conjunction with this vector?** `vars <- c("year", "month", "day", "dep_delay", "arr_delay")` Can be used to select multiple variables with a character vector or to negate selecting certain variables. **4\. Does the result of running the following code surprise you? How do the select helpers deal with case by default? How can you change that default?** ``` select(flights, contains("TIME")) ``` ``` ## # A tibble: 336,776 x 6 ## dep_time sched_dep_time arr_time sched_arr_time air_time ## <int> <int> <int> <int> <dbl> ## 1 517 515 830 819 227 ## 2 533 529 850 830 227 ## 3 542 540 923 850 160 ## 4 544 545 1004 1022 183 ## 5 554 600 812 837 116 ## 6 554 558 740 728 150 ## 7 555 600 913 854 158 ## 8 557 600 709 723 53 ## 9 557 600 838 846 140 ## 10 558 600 753 745 138 ## # ... with 336,766 more rows, and 1 more variable: time_hour <dttm> ``` Default is case insensitive, to change this specify `ignore.case = FALSE` ``` select(flights, contains("TIME", ignore.case = FALSE)) ``` ``` ## # A tibble: 336,776 x 0 ``` 5\.5: Add new vars ------------------ Check\-out different rank functions ``` x <- c(1, 2, 3, 4, 4, 6, 7, 8, 8, 10) min_rank(x) ``` ``` ## [1] 1 2 3 4 4 6 7 8 8 10 ``` ``` dense_rank(x) ``` ``` ## [1] 1 2 3 4 4 5 6 7 7 8 ``` ``` percent_rank(x) ``` ``` ## [1] 0.0000000 0.1111111 0.2222222 0.3333333 0.3333333 0.5555556 0.6666667 ## [8] 0.7777778 0.7777778 1.0000000 ``` ``` cume_dist(x) ``` ``` ## [1] 0.1 0.2 0.3 0.5 0.5 0.6 0.7 0.9 0.9 1.0 ``` ### 5\.5\.2\. **1\. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. 
Convert them to a more convenient representation of number of minutes since midnight.** ``` time_to_mins <- function(x) (60*(x %/% 100) + (x %% 100)) ``` ``` flights_new <- mutate(flights, DepTime_MinsToMid = time_to_mins(dep_time), #same thing as above, but without calling custom function DepTime_MinsToMid_copy = (60*(dep_time %/% 100) + (dep_time %% 100)), SchedDepTime_MinsToMid = time_to_mins(sched_dep_time)) ``` **2\. Compare `air_time` with `arr_time` \- `dep_time`. What do you expect to see? What do you see? What do you need to do to fix it?** You would expect that: \\(air\\\_time \= dep\\\_time \- arr\\\_time\\) However this does not seem to be the case when you look at `air_time` generally… see [5\.5\.2\.2\.](05-data-transformations.html#section-15) for more details. **3\. Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?** You would expect that: \\(dep\\\_delay \= dep\\\_time \- sched\\\_dep\\\_time\\) . Let’s see if this is the case by creating a var `dep_delay2` that uses this definition, then see if it is equal to the original `dep_delay` ``` ##maybe a couple off, but for the most part seems consistent mutate(flights, dep_delay2 = time_to_mins(dep_time) - time_to_mins(sched_dep_time), dep_same = dep_delay == dep_delay2) %>% count(dep_same) ``` ``` ## # A tibble: 3 x 2 ## dep_same n ## <lgl> <int> ## 1 FALSE 1207 ## 2 TRUE 327314 ## 3 NA 8255 ``` Seems generally to align (with `dep_delay`). Those that are inconsistent are when the delay bleeds into the next day, indicating a problem with my equation above, not the `dep_delay` value as you can see below. ``` mutate(flights, dep_delay2 = time_to_mins(dep_time) - time_to_mins(sched_dep_time), dep_same = dep_delay == dep_delay2) %>% filter(!dep_same) %>% glimpse() ``` ``` ## Observations: 1,207 ## Variables: 21 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 9, 9, 9, 10... ## $ dep_time <int> 848, 42, 126, 32, 50, 235, 25, 106, 14, 37, 16,... ## $ sched_dep_time <int> 1835, 2359, 2250, 2359, 2145, 2359, 2359, 2245,... ## $ dep_delay <dbl> 853, 43, 156, 33, 185, 156, 26, 141, 15, 127, 1... ## $ arr_time <int> 1001, 518, 233, 504, 203, 700, 505, 201, 503, 3... ## $ sched_arr_time <int> 1950, 442, 2359, 442, 2311, 437, 442, 2356, 445... ## $ arr_delay <dbl> 851, 36, 154, 22, 172, 143, 23, 125, 18, 130, 9... ## $ carrier <chr> "MQ", "B6", "B6", "B6", "B6", "B6", "B6", "B6",... ## $ flight <int> 3944, 707, 22, 707, 104, 727, 707, 608, 739, 11... ## $ tailnum <chr> "N942MQ", "N580JB", "N636JB", "N763JB", "N329JB... ## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "JFK", "JFK", "JFK"... ## $ dest <chr> "BWI", "SJU", "SYR", "SJU", "BUF", "BQN", "SJU"... ## $ air_time <dbl> 41, 189, 49, 193, 58, 186, 194, 44, 201, 163, 1... ## $ distance <dbl> 184, 1598, 209, 1598, 301, 1576, 1598, 273, 161... ## $ hour <dbl> 18, 23, 22, 23, 21, 23, 23, 22, 23, 22, 23, 23,... ## $ minute <dbl> 35, 59, 50, 59, 45, 59, 59, 45, 59, 30, 59, 59,... ## $ time_hour <dttm> 2013-01-01 18:00:00, 2013-01-02 23:00:00, 2013... ## $ dep_delay2 <dbl> -587, -1397, -1284, -1407, -1255, -1284, -1414,... ## $ dep_same <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE... ``` **4\. Find the 10 most delayed flights using a ranking function. How do you want to handle ties? 
Carefully read the documentation for `min_rank()`.** ``` mutate(flights, rank_delay = min_rank(-arr_delay)) %>% arrange(rank_delay) %>% filter(rank_delay <= 10) %>% select(flight, sched_dep_time, arr_delay, rank_delay) ``` ``` ## # A tibble: 10 x 4 ## flight sched_dep_time arr_delay rank_delay ## <int> <int> <dbl> <int> ## 1 51 900 1272 1 ## 2 3535 1935 1127 2 ## 3 3695 1635 1109 3 ## 4 177 1845 1007 4 ## 5 3075 1600 989 5 ## 6 2391 1900 931 6 ## 7 2119 810 915 7 ## 8 2047 759 895 8 ## 9 172 1700 878 9 ## 10 3744 2055 875 10 ``` **5\. What does `1:3 + 1:10` return? Why?** ``` 1:3 + 1:10 ``` ``` ## Warning in 1:3 + 1:10: longer object length is not a multiple of shorter ## object length ``` ``` ## [1] 2 4 6 5 7 9 8 10 12 11 ``` This is returned because `1:3` is being recycled as each element is added to an element in 1:10\. **6\. What trigonometric functions does R provide?** ``` ?sin ``` 5\.6: Grouped summaries… ------------------------ ``` not_cancelled <- flights %>% filter(!is.na(dep_delay), !is.na(arr_delay)) not_cancelled %>% select(year, month, day, dep_time) %>% group_by(year, month, day) %>% mutate(r = min_rank(desc(dep_time))) %>% mutate(range_min = range(r)[1], range_max = range(r)[2]) %>% filter(r %in% range(r)) ``` ### 5\.6\.7\. **1\. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.** *90th percentile for delays for flights by destination* ``` flights %>% group_by(dest) %>% summarise(delay.90 = quantile(arr_delay, 0.90, na.rm = TRUE)) %>% arrange(desc(delay.90)) ``` ``` ## # A tibble: 105 x 2 ## dest delay.90 ## <chr> <dbl> ## 1 TUL 126 ## 2 TYS 109. ## 3 CAE 107 ## 4 DSM 103 ## 5 OKC 99.6 ## 6 BHM 99.2 ## 7 RIC 90 ## 8 PVD 81.3 ## 9 CRW 80.8 ## 10 CVG 80 ## # ... with 95 more rows ``` *average `dep_delay` by hour of day* ``` flights %>% group_by(hour) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). 
``` *Percentage of flights delayed or canceled by `origin`* ``` flights %>% group_by(origin) %>% summarise(num_delayed = sum(arr_delay > 0, na.rm = TRUE)/n()) ``` ``` ## # A tibble: 3 x 2 ## origin num_delayed ## <chr> <dbl> ## 1 EWR 0.415 ## 2 JFK 0.385 ## 3 LGA 0.382 ``` *Percentage of flights canceled by airline* (technically not delays, but cancellations…) ``` flights %>% group_by(carrier) %>% summarise(perc_canceled = sum(is.na(arr_delay))/n(), n = n()) %>% ungroup() %>% filter(n >= 1000) %>% mutate(most_rank = min_rank(-perc_canceled)) %>% arrange(most_rank) ``` ``` ## # A tibble: 11 x 4 ## carrier perc_canceled n most_rank ## <chr> <dbl> <int> <int> ## 1 9E 0.0632 18460 1 ## 2 EV 0.0566 54173 2 ## 3 MQ 0.0515 26397 3 ## 4 US 0.0343 20536 4 ## 5 FL 0.0261 3260 5 ## 6 AA 0.0239 32729 6 ## 7 WN 0.0188 12275 7 ## 8 UA 0.0151 58665 8 ## 9 B6 0.0107 54635 9 ## 10 DL 0.00940 48110 10 ## 11 VX 0.00891 5162 11 ``` *Percentage of flights delayed by airline* ``` flights %>% group_by(carrier) %>% summarise(perc_delayed = sum(arr_delay > 0, na.rm = TRUE)/sum(!is.na(arr_delay)), n = n()) %>% ungroup() %>% filter(n >= 1000) %>% mutate(most_rank = min_rank(-perc_delayed)) %>% arrange(most_rank) ``` ``` ## # A tibble: 11 x 4 ## carrier perc_delayed n most_rank ## <chr> <dbl> <int> <int> ## 1 FL 0.597 3260 1 ## 2 EV 0.479 54173 2 ## 3 MQ 0.467 26397 3 ## 4 WN 0.440 12275 4 ## 5 B6 0.437 54635 5 ## 6 UA 0.385 58665 6 ## 7 9E 0.384 18460 7 ## 8 US 0.371 20536 8 ## 9 DL 0.344 48110 9 ## 10 VX 0.341 5162 10 ## 11 AA 0.335 32729 11 ``` **Consider the following scenarios:** *1\.1 A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(early.15 = sum(arr_delay <= -15, na.rm = TRUE)/n(), late.15 = sum(arr_delay >= 15, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(early.15 == .5, late.15 == .5) ``` ``` ## # A tibble: 18 x 4 ## flight early.15 late.15 n ## <int> <dbl> <dbl> <int> ## 1 107 0.5 0.5 2 ## 2 2072 0.5 0.5 2 ## 3 2366 0.5 0.5 2 ## 4 2500 0.5 0.5 2 ## 5 2552 0.5 0.5 2 ## 6 3495 0.5 0.5 2 ## 7 3518 0.5 0.5 2 ## 8 3544 0.5 0.5 2 ## 9 3651 0.5 0.5 2 ## 10 3705 0.5 0.5 2 ## 11 3916 0.5 0.5 2 ## 12 3951 0.5 0.5 2 ## 13 4273 0.5 0.5 2 ## 14 4313 0.5 0.5 2 ## 15 5297 0.5 0.5 2 ## 16 5322 0.5 0.5 2 ## 17 5388 0.5 0.5 2 ## 18 5505 0.5 0.5 4 ``` *1\.2 A flight is always 10 minutes late.* ``` flights %>% group_by(flight) %>% summarise(late.10 = sum(arr_delay >= 10)/n()) %>% ungroup() %>% filter(late.10 == 1) ``` ``` ## # A tibble: 93 x 2 ## flight late.10 ## <int> <dbl> ## 1 94 1 ## 2 730 1 ## 3 974 1 ## 4 1084 1 ## 5 1226 1 ## 6 1510 1 ## 7 1514 1 ## 8 1859 1 ## 9 1868 1 ## 10 2101 1 ## # ... with 83 more rows ``` *1\.3 A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(early.30 = sum(arr_delay <= -30, na.rm = TRUE)/n(), late.30 = sum(arr_delay >= 30, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(early.30 == .5, late.30 == .5) ``` ``` ## # A tibble: 3 x 4 ## flight early.30 late.30 n ## <int> <dbl> <dbl> <int> ## 1 3651 0.5 0.5 2 ## 2 3916 0.5 0.5 2 ## 3 3951 0.5 0.5 2 ``` *1\.4 99% of the time a flight is on time. 
1% of the time it’s 2 hours late.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(ontime == .99, late.120 == .01) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: flight <int>, ontime <dbl>, late.120 <dbl>, ## # n <int> ``` Looks like this exact proportion doesn’t happen. Let’s change this to be \>\= 99% and \<\= 1%. ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(ontime >= .99, late.120 <= .01) ``` ``` ## # A tibble: 391 x 4 ## flight ontime late.120 n ## <int> <dbl> <dbl> <int> ## 1 46 1 0 2 ## 2 52 1 0 2 ## 3 88 1 0 1 ## 4 90 1 0 1 ## 5 96 1 0 1 ## 6 99 1 0 1 ## 7 106 1 0 1 ## 8 122 1 0 1 ## 9 174 1 0 1 ## 10 202 1 0 5 ## # ... with 381 more rows ``` **2\. Which is more important: arrival delay or departure delay?** Arrival delay. **3\. Come up with another approach that will give you the same output as `not_cancelled %>% count(dest)` and `not_cancelled %>% count(tailnum, wt = distance)` (without using `count()`).** ``` not_cancelled <- flights %>% filter(!is.na(dep_delay), !is.na(arr_delay)) not_cancelled %>% group_by(dest) %>% summarise(n = n()) ``` ``` ## # A tibble: 104 x 2 ## dest n ## <chr> <int> ## 1 ABQ 254 ## 2 ACK 264 ## 3 ALB 418 ## 4 ANC 8 ## 5 ATL 16837 ## 6 AUS 2411 ## 7 AVL 261 ## 8 BDL 412 ## 9 BGR 358 ## 10 BHM 269 ## # ... with 94 more rows ``` ``` not_cancelled %>% group_by(tailnum) %>% summarise(n = sum(distance)) ``` ``` ## # A tibble: 4,037 x 2 ## tailnum n ## <chr> <dbl> ## 1 D942DN 3418 ## 2 N0EGMQ 239143 ## 3 N10156 109664 ## 4 N102UW 25722 ## 5 N103US 24619 ## 6 N104UW 24616 ## 7 N10575 139903 ## 8 N105UW 23618 ## 9 N107US 21677 ## 10 N108UW 32070 ## # ... with 4,027 more rows ``` **4\. Our definition of cancelled flights (`is.na(dep_delay) | is.na(arr_delay)`) is slightly suboptimal. Why? Which is the most important column?** You only need the `is.na(arr_delay)` column. By having both, it is doing more checks then is necessary. (While not a perfect method) you can see that the number of rows with just `is.na(arr_delay)` would be the same in either case. ``` filter(flights, is.na(dep_delay) | is.na(arr_delay)) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 9430 ``` ``` filter(flights, is.na(arr_delay)) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 9430 ``` To be more precise, you could check these with the `identical` function. ``` check_1 <- filter(flights, is.na(dep_delay) | is.na(arr_delay)) check_2 <- filter(flights, is.na(arr_delay)) identical(check_1, check_2) ``` ``` ## [1] TRUE ``` **5\. Look at the number of cancelled flights per day. 
Is there a pattern?** Number of canceled flights by day of month: ``` flights %>% group_by(day) %>% summarise(num = n(), cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled))+ geom_line() ``` * Some days of the month have more cancellations **Is the proportion of cancelled flights related to the average delay?** Proporton of canceled flights and then average delay of flights by day: ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled_perc))+ geom_line() flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = avg_delayed))+ geom_line() ``` * Looks roughly like there is some overlap. Plot, treating day independently: ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = cancelled_perc, y = avg_delayed))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` * suggests positive association **6\. Which carrier has the worst delays?** ``` flights %>% group_by(carrier) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE), n = n()) %>% arrange(desc(avg_delay)) ``` ``` ## # A tibble: 16 x 3 ## carrier avg_delay n ## <chr> <dbl> <int> ## 1 F9 21.9 685 ## 2 FL 20.1 3260 ## 3 EV 15.8 54173 ## 4 YV 15.6 601 ## 5 OO 11.9 32 ## 6 MQ 10.8 26397 ## 7 WN 9.65 12275 ## 8 B6 9.46 54635 ## 9 9E 7.38 18460 ## 10 UA 3.56 58665 ## 11 US 2.13 20536 ## 12 VX 1.76 5162 ## 13 DL 1.64 48110 ## 14 AA 0.364 32729 ## 15 HA -6.92 342 ## 16 AS -9.93 714 ``` **Challenge: can you disentangle the effects of bad airports vs. bad carriers? Why/why not? (Hint: think about flights %\>% group\_by(carrier, dest) %\>% summarise(n()))** Somewhat difficult to untangle in the `origin` airports because carriers may predominantly go through one of the three. The code below produces the origin name that the carrier that flies from the most along with the proportion of associated flights. ``` flights %>% group_by(carrier, origin) %>% summarise(n = n()) %>% mutate(perc = n / sum(n)) %>% group_by(carrier) %>% mutate(rank = min_rank(-perc)) %>% arrange(carrier, rank) %>% filter(rank == 1) %>% select(carrier, highest_origin = origin, highest_prop = perc, n_total = n) %>% arrange(desc(n_total)) ``` ``` ## # A tibble: 16 x 4 ## # Groups: carrier [16] ## carrier highest_origin highest_prop n_total ## <chr> <chr> <dbl> <int> ## 1 UA EWR 0.786 46087 ## 2 EV EWR 0.811 43939 ## 3 B6 JFK 0.770 42076 ## 4 DL LGA 0.479 23067 ## 5 MQ LGA 0.641 16928 ## 6 AA LGA 0.472 15459 ## 7 9E JFK 0.794 14651 ## 8 US LGA 0.640 13136 ## 9 WN EWR 0.504 6188 ## 10 VX JFK 0.697 3596 ## 11 FL LGA 1 3260 ## 12 AS EWR 1 714 ## 13 F9 LGA 1 685 ## 14 YV LGA 1 601 ## 15 HA JFK 1 342 ## 16 OO LGA 0.812 26 ``` Below we look at destinations and the `carrier` that has the highest proportion of flights from one of the NYC destinations (ignoring for specific `origin` – JFK, LGA, etc. are not seperated). 
``` flights %>% group_by(dest, carrier) %>% summarise(n = n()) %>% mutate(perc = n / sum(n)) %>% group_by(dest) %>% mutate(rank = min_rank(-perc)) %>% arrange(carrier, rank) %>% filter(rank == 1) %>% select(dest, highest_carrier = carrier, highest_perc = perc, n_total = n) %>% arrange(desc(n_total)) ``` ``` ## # A tibble: 105 x 4 ## # Groups: dest [105] ## dest highest_carrier highest_perc n_total ## <chr> <chr> <dbl> <int> ## 1 ATL DL 0.614 10571 ## 2 CLT US 0.614 8632 ## 3 DFW AA 0.831 7257 ## 4 MIA AA 0.617 7234 ## 5 ORD UA 0.404 6984 ## 6 IAH UA 0.962 6924 ## 7 SFO UA 0.512 6819 ## 8 FLL B6 0.544 6563 ## 9 MCO B6 0.460 6472 ## 10 LAX UA 0.360 5823 ## # ... with 95 more rows ``` To get at the question of ‘best carrier’, you may consider doing a grouped comparison of average delays or cancellataions controlling for where they are flying to and from what origin… Or build a linear model with the formula, `arr_delay ~ carrier + dest + origin`. **7\. What does the `sort` argument to `count()` do. When might you use it?** `sort` orders by `n`, you may want to use it when you want to see the highest frequency levels. 5\.7: Grouped mutates… ---------------------- ### 5\.7\.1\. **1\. Refer back to the lists of useful mutate and filtering functions. Describe how each operation changes when you combine it with grouping.** Performs operations on vectors for each group (rather than all together). **2\. Which plane (tailnum) has the worst on\-time record?** ``` flights %>% group_by(tailnum) %>% summarise(n = n(), num_not_delayed = sum(arr_delay <= 0, na.rm = TRUE), ontime_rate = num_not_delayed/ n, sum_delayed_time_grt0 = sum(ifelse(arr_delay >= 0, arr_delay, 0), na.rm = TRUE)) %>% filter(n > 100, !is.na(tailnum)) %>% arrange(ontime_rate) ``` ``` ## # A tibble: 1,200 x 5 ## tailnum n num_not_delayed ontime_rate sum_delayed_time_grt0 ## <chr> <int> <int> <dbl> <dbl> ## 1 N505MQ 242 83 0.343 5911 ## 2 N15910 280 105 0.375 8737 ## 3 N36915 228 86 0.377 6392 ## 4 N16919 251 96 0.382 7955 ## 5 N14998 230 88 0.383 7166 ## 6 N14953 256 100 0.391 6550 ## 7 N22971 230 90 0.391 6547 ## 8 N503MQ 191 75 0.393 4420 ## 9 N27152 109 43 0.394 2058 ## 10 N31131 109 43 0.394 2740 ## # ... with 1,190 more rows ``` N505MQ **3\. What time of day should you fly if you want to avoid delays as much as possible?** average `dep_delay` by hour of day ``` flights %>% group_by(hour) %>% summarise(med_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = med_delay))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` Fly in the morning. **4\. For each destination, compute the total minutes of delay. 
For each, flight, compute the proportion of the total delay for its destination.** ``` flights %>% filter(arr_delay > 0) %>% group_by(dest, flight) %>% summarise(TotalDelay_DestFlight = sum(arr_delay, na.rm = TRUE)) %>% mutate(TotalDelay_Dest = sum(TotalDelay_DestFlight), PropOfDest = TotalDelay_DestFlight / TotalDelay_Dest) ``` ``` ## # A tibble: 8,505 x 5 ## # Groups: dest [103] ## dest flight TotalDelay_DestFlight TotalDelay_Dest PropOfDest ## <chr> <int> <dbl> <dbl> <dbl> ## 1 ABQ 65 1943 4487 0.433 ## 2 ABQ 1505 2544 4487 0.567 ## 3 ACK 1191 1413 2974 0.475 ## 4 ACK 1195 62 2974 0.0208 ## 5 ACK 1291 267 2974 0.0898 ## 6 ACK 1491 1232 2974 0.414 ## 7 ALB 3260 111 9580 0.0116 ## 8 ALB 3264 4 9580 0.000418 ## 9 ALB 3811 599 9580 0.0625 ## 10 ALB 3817 196 9580 0.0205 ## # ... with 8,495 more rows ``` I did this such that flights could not have “negative” delays, this could have been approached differently such that “early arrivals” also got credit… **5\. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using lag() explore how the delay of a flight is related to the delay of the immediately preceding flight.** ``` flights %>% group_by(origin) %>% mutate(delay_lag = lag(dep_delay, 1), diff_lag = abs(dep_delay -delay_lag)) %>% ungroup() %>% select(dep_delay, delay_lag) %>% na.omit() %>% cor() ``` ``` ## dep_delay delay_lag ## dep_delay 1.0000000 0.3506866 ## delay_lag 0.3506866 1.0000000 ``` Correlation of dep\_delayt\-1 with dep\_delayt is 0\.35\. Below is a function to get the correlation out for any lag level. ``` cor_by_lag <- function(lag){ flights %>% group_by(origin) %>% mutate(delay_lag = lag(dep_delay, lag), diff_lag = abs(dep_delay -delay_lag)) %>% ungroup() %>% select(dep_delay, delay_lag) %>% na.omit() %>% cor() %>% .[2,1] %>% as.vector() } ``` Let’s see the correlation pushing the lag time back. ``` cor_by_lag(1) ``` ``` ## [1] 0.3506866 ``` ``` cor_by_lag(10) ``` ``` ## [1] 0.2622796 ``` ``` cor_by_lag(100) ``` ``` ## [1] 0.04023232 ``` It makes sense that these values get smaller as flights that are further apart have delay lengths that are less correlated. See [5\.7\.1\.8\.](05-data-transformations.html#section-23) for the outputs if iterating this function across many lags. **6\. Look at each destination. Can you find flights that are suspiciously fast? (i.e. flights that represent a potential data entry error). Compute the air time a flight relative to the shortest flight to that destination. Which flights were most delayed in the air?** ``` flights %>% filter(!is.na(arr_delay)) %>% group_by(dest) %>% mutate(sd_air_time = sd(air_time), mean_air_time = mean(air_time)) %>% ungroup() %>% mutate(supect_fast_cutoff = mean_air_time - 4*sd_air_time, suspect_flag = air_time < supect_fast_cutoff) %>% select(dest, flight, hour, day, month, air_time, sd_air_time, mean_air_time, supect_fast_cutoff, suspect_flag, air_time, air_time) %>% filter(suspect_flag) ``` ``` ## # A tibble: 4 x 10 ## dest flight hour day month air_time sd_air_time mean_air_time ## <chr> <int> <dbl> <int> <int> <dbl> <dbl> <dbl> ## 1 BNA 3805 19 23 3 70 11.0 114. ## 2 GSP 4292 20 13 5 55 8.13 93.4 ## 3 ATL 1499 17 25 5 65 9.81 113. ## 4 MSP 4667 15 2 7 93 11.8 151. ## # ... with 2 more variables: supect_fast_cutoff <dbl>, suspect_flag <lgl> ``` **7\. Find all destinations that are flown by at least two carriers. 
Use that information to rank the carriers.** I found this quesiton ambiguous in terms of what it wants when it says “rank” the carriers using this. What I did was filter to just those destinations that have at least two carriers and then count the number of destinations with multiple carriers that each airline travels to. So it’s almost which airlines have more routes to ‘crowded’ destinations. ``` flights %>% group_by(dest) %>% mutate(n_carrier = n_distinct(carrier)) %>% filter(n_carrier > 1) %>% group_by(carrier) %>% summarise(n_dest = n_distinct(dest)) %>% mutate(rank = min_rank(-n_dest)) %>% arrange(rank) ``` ``` ## # A tibble: 16 x 3 ## carrier n_dest rank ## <chr> <int> <int> ## 1 EV 51 1 ## 2 9E 48 2 ## 3 UA 42 3 ## 4 DL 39 4 ## 5 B6 35 5 ## 6 AA 19 6 ## 7 MQ 19 6 ## 8 WN 10 8 ## 9 OO 5 9 ## 10 US 5 9 ## 11 VX 4 11 ## 12 YV 3 12 ## 13 FL 2 13 ## 14 AS 1 14 ## 15 F9 1 14 ## 16 HA 1 14 ``` Another way to approach this may have been to say to evaluate the delays between carriers going to the same destination and used that as a way of comparing and ‘ranking’ the best carriers. This would have been a more ambitious problem to answer. **8\. For each plane, count the number of flights before the first delay of greater than 1 hour.** ``` tail_nums_counts <- flights %>% arrange(tailnum, month, day, dep_time) %>% group_by(tailnum) %>% mutate(cum_sum = cumsum(arr_delay <= 60), nrow = row_number(), nrow_equal = nrow == cum_sum, cum_sum_before = cum_sum * nrow_equal) %>% mutate(total_before_hour = max(cum_sum_before, na.rm = TRUE)) %>% select(year, month, day, dep_time, tailnum, arr_delay, cum_sum, nrow, nrow_equal, cum_sum_before, total_before_hour) %>% ungroup() #let's change this to get rid of canceled flights, because those don't count as flights or delays. tail_nums_counts <- flights %>% filter(!is.na(arr_delay)) %>% select(tailnum, month, day, dep_time, arr_delay) %>% arrange(tailnum, month, day, dep_time) %>% group_by(tailnum) %>% mutate(cum_sum = cumsum(arr_delay <= 60), nrow = row_number(), nrow_equal = nrow == cum_sum, cum_sum_before = cum_sum * nrow_equal) %>% mutate(total_before_hour = max(cum_sum_before, na.rm = TRUE)) %>% select(month, day, dep_time, tailnum, arr_delay, cum_sum, nrow, nrow_equal, cum_sum_before, total_before_hour) %>% ungroup() tail_nums_counts %>% filter(!is.na(tailnum)) %>% arrange(desc(nrow), tailnum) %>% distinct(tailnum, .keep_all = TRUE) %>% select(tailnum, total_before_hour) %>% arrange(tailnum) ``` ``` ## # A tibble: 4,037 x 2 ## tailnum total_before_hour ## <chr> <dbl> ## 1 D942DN 0 ## 2 N0EGMQ 0 ## 3 N10156 9 ## 4 N102UW 25 ## 5 N103US 46 ## 6 N104UW 3 ## 7 N10575 0 ## 8 N105UW 22 ## 9 N107US 20 ## 10 N108UW 36 ## # ... with 4,027 more rows ``` Appendix -------- ### 5\.4\.1\.3\. You can also use `one_of()` for negating specific columns fields by name. ``` select(flights, -one_of(vars)) ``` ``` ## # A tibble: 336,776 x 15 ## year month day sched_dep_time sched_arr_time carrier flight tailnum ## <int> <int> <int> <int> <int> <chr> <int> <chr> ## 1 2013 1 1 515 819 UA 1545 N14228 ## 2 2013 1 1 529 830 UA 1714 N24211 ## 3 2013 1 1 540 850 AA 1141 N619AA ## 4 2013 1 1 545 1022 B6 725 N804JB ## 5 2013 1 1 600 837 DL 461 N668DN ## 6 2013 1 1 558 728 UA 1696 N39463 ## 7 2013 1 1 600 854 B6 507 N516JB ## 8 2013 1 1 600 723 EV 5708 N829AS ## 9 2013 1 1 600 846 B6 79 N593JB ## 10 2013 1 1 600 745 AA 301 N3ALAA ## # ... 
with 336,766 more rows, and 7 more variables: origin <chr>, ## # dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, ## # time_hour <dttm> ``` ### 5\.5\.2\.1\. Other, more sophisticated method[10](#fn10) ``` mutate_at(.tbl = flights, .vars = c("dep_time", "sched_dep_time"), .funs = funs(new = time_to_mins)) ``` ### 5\.5\.2\.2\. Let’s create this variable. I’ll name it `air_calc`. First method: ``` flights_new2 <- mutate(flights, # This air_time_clac step is necessary because you need to take into account red-eye flights in calculation air_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(air_time_calc) - time_to_mins(dep_time)) ``` The above method is the simple approach, though it doesn’t take into account the timezone of the arrivals locations. To handle this, I do a `left_join` on the `airports` dataframe and change `arr_time` to take into account the timezone and output the value in EST (as opposed to local time). We have not learned about ‘joins’ yet, so don’t worry if this loses you. ``` flights_new2 <- flights %>% left_join(select(nycflights13::airports, dest = faa, tz)) %>% mutate(arr_time_old = arr_time) %>% mutate(arr_time = arr_time - 100*(tz+5)) %>% mutate( # This arr_time_calc step is a helper variable I created to take into account the red-eye flights in calculation arr_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(arr_time_calc) - time_to_mins(dep_time)) %>% select(-arr_time_calc) ``` ``` ## Joining, by = "dest" ``` Curiouis if anyone explored the `air_time` variable and figured out the details of how exactly it was off if there was something systematic? I checked this briefly below, but did not go deep. **Closer look at `air_time`** Wanted to look at original `air_time` variable a little more. Histogram below shows that most differences are now between 20 \- 40 minutes from the actual time. ``` flights_new2 %>% group_by(dest) %>% summarise(distance_med = median(distance, na.rm = TRUE), air_calc_med = median(air_calc, na.rm = TRUE), air_old_med = median(air_time, na.rm = TRUE), diff_new_old = air_calc_med - air_old_med, diff_hrs = as.factor(round(diff_new_old/60)), num = n()) %>% ggplot(aes(diff_new_old))+ geom_histogram() ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` ## Warning: Removed 5 rows containing non-finite values (stat_bin). ``` Regressing `diff` on `arr_delay` and `dep_delay` (remember `diff` is the difference between `air_time` and `air_calc`) ``` mod_air_time2 <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ dep_delay + arr_delay, data = .) summary(mod_air_time2) ``` ``` ## ## Call: ## lm(formula = diff ~ dep_delay + arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -93.168 -6.684 0.688 6.878 101.169 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -33.511843 0.024118 -1389.5 <2e-16 *** ## dep_delay 0.533376 0.001355 393.5 <2e-16 *** ## arr_delay -0.552852 0.001217 -454.2 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 12.43 on 319806 degrees of freedom ## Multiple R-squared: 0.3956, Adjusted R-squared: 0.3956 ## F-statistic: 1.047e+05 on 2 and 319806 DF, p-value: < 2.2e-16 ``` Doing such accounts for \~40% of the variation in the values. 
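As a quick check on that figure, the R\-squared values can be pulled straight from the fitted model object rather than read off the printed summary (a minimal sketch; `mod_air_time2` is the model fitted just above):

```
# Extract the R-squared components directly from the lm fit
summary(mod_air_time2)$r.squared
summary(mod_air_time2)$adj.r.squared
```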
* note `dep_delay` and `arr_delay` variables are highly colinear – and the coefficients are opposite in the model. ``` flights_new2 %>% select(air_time, air_calc, arr_delay, dep_delay) %>% mutate(diff = air_time - air_calc) %>% select(-air_time, -air_calc) %>% na.omit() %>% cor() ``` ``` ## arr_delay dep_delay diff ## arr_delay 1.0000000 0.91531953 -0.32086698 ## dep_delay 0.9153195 1.00000000 -0.07582942 ## diff -0.3208670 -0.07582942 1.00000000 ``` Often this suggests you may not need to include both variables in the model as they will likely be providing the same information. Though here that is not the case as only including `arr_delay` associates with a steep decline in `R^2` to just account for \~10% of the variation. ``` mod_air_time <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ arr_delay, data = .) summary(mod_air_time) ``` ``` ## ## Call: ## lm(formula = diff ~ arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -182.960 -6.385 2.013 7.983 154.382 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.984e+01 2.710e-02 -1101.3 <2e-16 *** ## arr_delay -1.144e-01 5.972e-04 -191.6 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 15.14 on 319807 degrees of freedom ## Multiple R-squared: 0.103, Adjusted R-squared: 0.103 ## F-statistic: 3.67e+04 on 1 and 319807 DF, p-value: < 2.2e-16 ``` ### 5\.6\.7\.1\. Below is an extension on using the `quantile` method, but it is far beyond where we are right now. For the question *90th percentile for delays for flights by destination* we used `quantile` to output only the 90th percentile of values for each destination. Here, I want to address what if you had wanted to output the delays at multiple values, say, arbitrarily the 25th, 50th, 75th percentiles. One option would be to create a new variable for each value and in each quantile function sepcify 0\.25, 0\.50, 0\.75 respectively. ``` flights %>% group_by(dest) %>% summarise(delay.25 = quantile(arr_delay, 0.25, na.rm = TRUE), delay.50 = quantile(arr_delay, 0.50, na.rm = TRUE), delay.75 = quantile(arr_delay, 0.75, na.rm = TRUE)) ``` ``` ## # A tibble: 105 x 4 ## dest delay.25 delay.50 delay.75 ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... with 95 more rows ``` But there is a lot of replication here and the `quantile` function is also able to output more than one value by specifying the `probs` argument. ``` quantile(c(1:100), probs = c(0.25, .50, 0.75)) ``` ``` ## 25% 50% 75% ## 25.75 50.50 75.25 ``` So, in theory, rather than calling `quantile` multiple times, you could just call it once. However for any variable you create `summarise` is expecting only a single value output for each row, so just passing it in as\-is will cause it to fail. 
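To see that constraint in isolation before applying it to `flights`, here is a toy sketch with made\-up data (assuming dplyr is loaded as in the rest of this document; the error behaviour shown is for the dplyr version used here):

```
# Toy data: two groups with a few values each
toy <- tibble(g = c("a", "a", "b", "b"), val = c(1, 2, 3, 40))

# One summary value per group works fine
toy %>%
  group_by(g) %>%
  summarise(med = median(val))

# Three quantiles per group is not a single value, so this errors
# (uncomment to see a message similar to the one below):
# toy %>%
#   group_by(g) %>%
#   summarise(q = quantile(val, probs = c(0.25, 0.5, 0.75)))

# Wrapping the result in list() stores all three values in a single list-column cell
toy %>%
  group_by(g) %>%
  summarise(q = list(quantile(val, probs = c(0.25, 0.5, 0.75))))
```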
``` flights %>% group_by(dest) %>% summarise(delays = quantile(arr_delay, probs = c(0.25, .50, 0.75), na.rm = TRUE)) ``` ``` ## Error: Column `delays` must be length 1 (a summary value), not 3 ``` To make this work you need to make the value a list, so that it will output a single list in each row of the column\[This style is covered at the end of the book in the section ‘list\-columns’ in iteration.]\[Also you need your dataframe to be in a tibble form rather than traditional dataframes for list\-cols to work]. I am going to create another list\-column field of the quantiles I specified. ``` prob_vals <- seq(from = 0.25, to = 0.75, by = 0.25) flights_quantiles <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, probs = prob_vals, na.rm = TRUE)), delays_q = list(c('25th', '50th', '75th'))) flights_quantiles ``` ``` ## # A tibble: 105 x 3 ## dest delays_val delays_q ## <chr> <list> <list> ## 1 ABQ <dbl [3]> <chr [3]> ## 2 ACK <dbl [3]> <chr [3]> ## 3 ALB <dbl [3]> <chr [3]> ## 4 ANC <dbl [3]> <chr [3]> ## 5 ATL <dbl [3]> <chr [3]> ## 6 AUS <dbl [3]> <chr [3]> ## 7 AVL <dbl [3]> <chr [3]> ## 8 BDL <dbl [3]> <chr [3]> ## 9 BGR <dbl [3]> <chr [3]> ## 10 BHM <dbl [3]> <chr [3]> ## # ... with 95 more rows ``` To convert these outputs out of the list\-col format, I can use the function `unnest`. ``` flights_quantiles %>% unnest() ``` ``` ## # A tibble: 315 x 3 ## dest delays_val delays_q ## <chr> <dbl> <chr> ## 1 ABQ -24 25th ## 2 ABQ -5.5 50th ## 3 ABQ 22.8 75th ## 4 ACK -13 25th ## 5 ACK -3 50th ## 6 ACK 10 75th ## 7 ALB -17 25th ## 8 ALB -4 50th ## 9 ALB 28 75th ## 10 ANC -10.8 25th ## # ... with 305 more rows ``` This will output the values as individual rows, repeating the `dest` value for the length of the list. If I want to spread the `delays_quantile` values into seperate columns I can use the `spread` function that is in the tidying R chapter. ``` flights_quantiles %>% unnest() %>% spread(key = delays_q, value = delays_val, sep = "_") ``` ``` ## # A tibble: 105 x 4 ## dest delays_q_25th delays_q_50th delays_q_75th ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... with 95 more rows ``` Let’s plot our unnested (but not unspread) data to see roughly the distribution of the delays for each destination at our quantiles of interest[11](#fn11). ``` flights_quantiles %>% unnest() %>% # mutate(delays_q = forcats::fct_reorder(f = delays_q, x = delays_val, fun = mean, na.rm = TRUE)) %>% ggplot(aes(x = delays_q, y = delays_val))+ geom_boxplot() ``` ``` ## Warning: Removed 3 rows containing non-finite values (stat_boxplot). ``` It can be a hassle naming the values explicitly. `quantile`’s default `probs` argument value is 0, 0\.25, 0\.5, 0\.75, 1\. Rather than needing to type the `delays_q` values `list(c('0%', '25%', '50%', '75%', '100%'))` you could have generated the values of these names dynamically using the `map` function in the `purrr` package (see chapter on iteration) with example for this by passing the `names` function over each value in `delays_val`. 
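The mechanism is easier to see on a plain vector first, away from the flights data (a minimal sketch; `q` is just a throwaway name): `quantile()` already returns a named vector, and `purrr::map()` can apply `names()` to every element of a list of such vectors.

```
# quantile() returns a *named* numeric vector
q <- quantile(c(1, 5, 10, 50, 100))
q
names(q)

# map() pulls the names out of each element of a list of quantile outputs
purrr::map(list(q, quantile(1:10)), names)
```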
``` flights_quantiles2 <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, na.rm = TRUE)), delays_q = list(c('0th', '25th', '50th', '75th', '100th'))) %>% mutate(delays_q2 = purrr::map(delays_val, names)) flights_quantiles2 ``` ``` ## # A tibble: 105 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <list> <list> <list> ## 1 ABQ <dbl [5]> <chr [5]> <chr [5]> ## 2 ACK <dbl [5]> <chr [5]> <chr [5]> ## 3 ALB <dbl [5]> <chr [5]> <chr [5]> ## 4 ANC <dbl [5]> <chr [5]> <chr [5]> ## 5 ATL <dbl [5]> <chr [5]> <chr [5]> ## 6 AUS <dbl [5]> <chr [5]> <chr [5]> ## 7 AVL <dbl [5]> <chr [5]> <chr [5]> ## 8 BDL <dbl [5]> <chr [5]> <chr [5]> ## 9 BGR <dbl [5]> <chr [5]> <chr [5]> ## 10 BHM <dbl [5]> <chr [5]> <chr [5]> ## # ... with 95 more rows ``` And then let’s `unnest` the data[12](#fn12). ``` flights_quantiles2 %>% unnest() ``` ``` ## # A tibble: 525 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <dbl> <chr> <chr> ## 1 ABQ -61 0th 0% ## 2 ABQ -24 25th 25% ## 3 ABQ -5.5 50th 50% ## 4 ABQ 22.8 75th 75% ## 5 ABQ 153 100th 100% ## 6 ACK -25 0th 0% ## 7 ACK -13 25th 25% ## 8 ACK -3 50th 50% ## 9 ACK 10 75th 75% ## 10 ACK 221 100th 100% ## # ... with 515 more rows ``` #### 5\.6\.7\.1\.4\. But let’s look at those flights that have the greatest differences in proportion on\-time vs. 2 hours late while still having values in both categories[13](#fn13). ``` flights %>% group_by(flight) %>% summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter_at(c("ontime", "late.120"), all_vars(. != 0 & . != 1)) %>% mutate(max_dist = abs(ontime - late.120)) %>% arrange(desc(max_dist)) ``` ``` ## # A tibble: 2,098 x 5 ## flight ontime late.120 n max_dist ## <int> <dbl> <dbl> <int> <dbl> ## 1 5288 0.927 0.0244 41 0.902 ## 2 2085 0.901 0.00658 152 0.895 ## 3 2174 0.914 0.0286 35 0.886 ## 4 2243 0.9 0.0167 120 0.883 ## 5 2180 0.889 0.0131 153 0.876 ## 6 2118 0.867 0.00699 143 0.860 ## 7 1167 0.864 0.00662 302 0.858 ## 8 3613 0.886 0.0286 35 0.857 ## 9 1772 0.891 0.0364 55 0.855 ## 10 1157 0.847 0.00667 150 0.84 ## # ... with 2,088 more rows ``` ### 5\.6\.7\.4\. To measure the difference in speed you can use the `microbenchmark` function ``` microbenchmark::microbenchmark(sub_optimal = filter(flights, is.na(dep_delay) | is.na(arr_delay)), optimal = filter(flights, is.na(arr_delay)), times = 10) ``` ``` ## Unit: milliseconds ## expr min lq mean median uq max neval cld ## sub_optimal 5.5279 6.2409 6.55796 6.74025 6.9686 7.2225 10 b ## optimal 3.9316 4.3135 4.55498 4.57885 4.8483 5.1514 10 a ``` ### 5\.6\.7\.5\. Explore the percentage delayed vs. percentage cancelled. ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), delayed = sum(arr_delay > 0, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, delayed_perc = delayed / num) %>% ggplot(aes(x = day))+ geom_line(aes(y = cancelled_perc), colour = "dark blue")+ geom_line(aes(y = delayed_perc), colour = "dark red") ``` Let’s try faceting by origin and looking at both values next to each other. 
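The code below first reshapes the daily summaries from wide to long with `gather()`, so that both measures share one `value` column and can be faceted. As a toy sketch of just that reshaping step (made\-up numbers, assuming tidyr is loaded as elsewhere in this document):

```
# A small wide table: one row per day, one column per measure
wide <- tibble(day = 1:2, avg_delayed = c(5, 12), cancelled_perc = c(0.01, 0.03))
wide

# gather() stacks the measure columns into a key ("type") and a value column
wide %>%
  gather(key = type, value = value, avg_delayed, cancelled_perc)
```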
``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% gather(key = type, value = value, avg_delayed, cancelled_perc) %>% ggplot(aes(x = day, y = value))+ geom_line()+ facet_grid(type ~ origin, scales = "free_y") ``` Look’s like the relationship across origins with the delay overlaid with color (not actually crazy about how this look). ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled_perc, colour = avg_delayed))+ geom_line()+ facet_grid(origin ~ .) ``` Let’s look at values as individual points and overlay a `geom_smooth` ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(avg_delayed, cancelled_perc, colour = origin))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` **Modeling approach:** We also could approach this using a model and regressing the average proportion of cancelled flights on average delay. ``` cancelled_mod1 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% lm(cancelled_perc ~ avg_delayed, data = .) summary(cancelled_mod1) ``` ``` ## ## Call: ## lm(formula = cancelled_perc ~ avg_delayed, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.026363 -0.009392 -0.002610 0.006196 0.048436 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.0152588 0.0020945 7.285 1.12e-10 *** ## avg_delayed 0.0018688 0.0002311 8.086 2.54e-12 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.01342 on 91 degrees of freedom ## Multiple R-squared: 0.4181, Adjusted R-squared: 0.4117 ## F-statistic: 65.39 on 1 and 91 DF, p-value: 2.537e-12 ``` ``` # ggplot(aes(x = day, y = cancelled_perc))+ # geom_line() ``` If you were confused by the `.` in `lm(cancelled_perc ~ avg_delayed, data = .)`, the dot specifies where the output from the prior steps should be piped into. The default is for it to go into the first argument, but for the `lm` function, data is not the first argument, so I have to explicitly tell it that the prior steps output should be inputted into the data argument of the `lm` function. See [On piping dots](05-data-transformations.html#on-piping-dots) for more details. The average delay accounts for 42% of the variation in the proportion of canceled flights. Modeling the log\-odds of the proportion of cancelled flights might be more successful as it produces a variable not constrained by 0 to 1, better aligning with the assumptions of linear regression. ``` cancelled_mod2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% lm(cancelled_logodds ~ avg_delayed, data = .) ``` To convert logodds back to percentage, I built the following equation. ``` convert_logodds <- function(log_odds) exp(log_odds) / (1 + exp(log_odds)) ``` Let’s calculate the MAE or mean absolute error on our percentages. 
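MAE here is just the average absolute difference between the observed and predicted proportions. As a standalone sketch (the `mae()` helper name is mine and is not used elsewhere in this document):

```
# Mean absolute error: average size of the prediction errors, ignoring sign
mae <- function(actual, predicted) mean(abs(actual - predicted), na.rm = TRUE)

# Tiny made-up example: errors of 0.01, 0.01, 0.02 give an MAE of ~0.013
mae(c(0.02, 0.05, 0.10), c(0.03, 0.04, 0.08))
```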
``` cancelled_preds2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% ungroup() %>% modelr::spread_predictions(cancelled_mod1, cancelled_mod2) %>% mutate(cancelled_mod2 = convert_logodds(cancelled_mod2)) cancelled_preds2 %>% summarise(MAE1 = mean(abs(cancelled_perc - cancelled_mod1), na.rm = TRUE), MAE2 = mean(abs(cancelled_perc - cancelled_mod2), na.rm = TRUE), mean_value = mean(cancelled_perc, na.rm = TRUE)) ``` ``` ## # A tibble: 1 x 3 ## MAE1 MAE2 mean_value ## <dbl> <dbl> <dbl> ## 1 0.0101 0.00954 0.0279 ``` Let’s look at the differences in the outputs of the predictions from these models. ``` cancelled_preds2 %>% ggplot(aes(avg_delayed, cancelled_perc))+ geom_point()+ scale_size_continuous(range = c(1, 2))+ geom_line(aes(y = cancelled_mod1), colour = "blue", size = 1)+ geom_line(aes(y = cancelled_mod2), colour = "red", size = 1) ``` [14](#fn14) ### 5\.6\.7\.6\. As an example, let’s look at just Atl flights from LGA and compare DL, FL, MQ. ``` flights %>% filter(dest == 'ATL', origin == 'LGA') %>% count(carrier) ``` ``` ## # A tibble: 5 x 2 ## carrier n ## <chr> <int> ## 1 DL 5544 ## 2 EV 1 ## 3 FL 2337 ## 4 MQ 2322 ## 5 WN 59 ``` And compare the median delays between the three primary carriers DL, FL, MQ. ``` carriers_lga_atl <- flights %>% filter(dest == 'ATL', origin == 'LGA') %>% group_by(carrier) %>% # filter out small samples mutate(n_tot = n()) %>% filter(n_tot > 100) %>% select(-n_tot) %>% ### filter(!is.na(arr_delay)) %>% ungroup() label <- carriers_lga_atl %>% group_by(carrier) %>% summarise(arr_delay = median(arr_delay, na.rm = TRUE)) carriers_lga_atl %>% select(carrier, arr_delay) %>% ggplot()+ geom_boxplot(aes(carrier, arr_delay, colour = carrier), outlier.shape = NA)+ coord_cartesian(y = c(-60, 75))+ geom_text(mapping = aes(x = carrier, group = carrier, y = arr_delay + 5, label = arr_delay), data = label) ``` Or perhaps you want to use a statistical method to compare if the differences in the grouped are significant… ``` carriers_lga_atl %>% lm(arr_delay ~ carrier, data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = arr_delay ~ carrier, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -64.74 -22.33 -11.33 4.67 888.67 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 6.3273 0.6149 10.29 < 2e-16 *** ## carrierFL 14.4172 1.1340 12.71 < 2e-16 *** ## carrierMQ 7.7067 1.1417 6.75 1.56e-11 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 45.48 on 9979 degrees of freedom ## Multiple R-squared: 0.01692, Adjusted R-squared: 0.01672 ## F-statistic: 85.86 on 2 and 9979 DF, p-value: < 2.2e-16 ``` This shows the mean delay for DL is \~6\.3, FL is \~20\.7, MQ is \~14 and FL and MQ are significantly different from DL (and DL is significantly different from 0\)[15](#fn15). The carrier accouts for \~1\.6% of the variation in arrival… etc…. ### 5\.7\.1\.6\. Let’s look at the fastest 20 `air_time`s for each destination. ``` flights_new2 %>% group_by(dest) %>% mutate(min_rank = min_rank(air_time)) %>% filter(min_rank < 20) %>% ggplot(aes(distance, air_time, colour = dest))+ geom_point()+ guides(colour = FALSE) ``` Let’s do the same for my custom `air_time` calculation `air_calc`. 
``` flights_new2 %>% group_by(dest) %>% mutate(min_rank = min_rank(air_calc)) %>% filter(min_rank < 20) %>% ggplot(aes(distance, air_calc, colour = dest))+ geom_point()+ guides(colour = FALSE) ``` *Rather than the fastest 20, let’s look at the mean `dist` and `air_time` for each[16](#fn16).* First using the `air_time` value. ``` flights_new2 %>% mutate_at(.vars = c("dep_time", "arr_time"), .funs = funs(time_to_mins)) %>% group_by(dest) %>% summarise(mean_air = mean(air_time, na.rm = TRUE), mean_dist = mean(distance, na.rm = TRUE)) %>% ggplot(., aes(x = mean_dist, y = mean_air))+ geom_point(aes(colour = dest))+ scale_y_continuous(breaks = seq(0, 660, 60))+ guides(colour = FALSE) ``` ``` ## Warning: funs() is soft deprecated as of dplyr 0.8.0 ## please use list() instead ## ## # Before: ## funs(name = f(.) ## ## # After: ## list(name = ~f(.)) ## This warning is displayed once per session. ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` Then with the custom `air_calc`. ``` flights_new2 %>% mutate_at(.vars = c("dep_time", "arr_time"), .funs = funs(time_to_mins)) %>% group_by(dest) %>% summarise(mean_air = mean(air_calc, na.rm = TRUE), mean_dist = mean(distance, na.rm = TRUE)) %>% ggplot(., aes(x = mean_dist, y = mean_air))+ geom_point(aes(colour = dest))+ scale_y_continuous(breaks = seq(0, 660, 60))+ guides(colour = FALSE) ``` ``` ## Warning: Removed 5 rows containing missing values (geom_point). ``` ### 5\.7\.1\.5 Let’s run this for every third lag (1, 4, 7, …) and plot. ``` lags_cors <- tibble(lag = seq(1,200, 3)) %>% mutate(cor = purrr::map_dbl(lag, cor_by_lag)) lags_cors %>% ggplot(aes(x = lag, cor))+ geom_line()+ coord_cartesian(ylim = c(0, 0.40)) ``` ### 5\.7\.1\.8\. ``` tail_nums_counts %>% nest() %>% sample_n(10) %>% unnest() %>% View() ``` ### On piping dots The `.` lets you explicitly state where to pipe the output from the prior steps. The default is to have it go into the first argument of the function. *Let’s look at an example:* ``` flights %>% filter(!is.na(arr_delay)) %>% count(origin) ``` ``` ## # A tibble: 3 x 2 ## origin n ## <chr> <int> ## 1 EWR 117127 ## 2 JFK 109079 ## 3 LGA 101140 ``` This is the exact same thing as the code below; I’ve just added the dots to be explicit about where in each function the output from the prior steps will go: ``` flights %>% filter(., !is.na(arr_delay)) %>% count(., origin) ``` ``` ## # A tibble: 3 x 2 ## origin n ## <chr> <int> ## 1 EWR 117127 ## 2 JFK 109079 ## 3 LGA 101140 ``` Functions in dplyr and the rest of the tidyverse expect the dataframe in the first argument, so the default piping behavior works fine and you don’t end up needing the dot in this way. However, functions outside of the tidyverse are not always so consistent and may expect the dataframe (or whatever your output from the prior step is) in a different position, hence the need to use the dot to specify where it should go. The example below uses base R’s `lm` (linear models) function to regress `arr_delay` on `dep_delay` and `distance`[17](#fn17). The first argument expects a formula and the second argument the data, hence the need for the dot. ``` flights %>% filter(., !is.na(arr_delay)) %>% lm(arr_delay ~ dep_delay + distance, .) ``` ``` ## ## Call: ## lm(formula = arr_delay ~ dep_delay + distance, data = .) ## ## Coefficients: ## (Intercept) dep_delay distance ## -3.212779 1.018077 -0.002551 ```
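The same pattern comes up with other base R functions that take a formula first and the data second. As an extra illustration of my own (not from the original write-up), `xtabs()` also needs the dot placed in its `data` argument; the counts it returns match the `count(origin)` output above.

```
flights %>%
  filter(., !is.na(arr_delay)) %>%
  xtabs(~ origin, data = .)
```

```
## origin
##    EWR    JFK    LGA 
## 117127 109079 101140
```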
When using the `.` in piping, I will usually make the argument name I am piping into explicit. This makes it clearer and also means that if I have the argument's position wrong it doesn’t matter. ``` flights %>% filter(., !is.na(arr_delay)) %>% lm(arr_delay ~ dep_delay + distance, data = .) ``` You can also use the `.` in conjunction with R’s subsetting to output vectors. In the example below I filter flights, then extract the `arr_delay` column as a vector and pipe it into the base R function `quantile`. ``` flights %>% filter(!is.na(arr_delay)) %>% .$arr_delay %>% quantile(probs = seq(from = 0, to = 1, by = 0.10)) ``` ``` ## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% ## -86 -26 -19 -14 -10 -5 1 9 21 52 1272 ``` `quantile` is expecting a numeric vector in its first argument, so the above works. If instead of `.$arr_delay` you’d tried `select(arr_delay)`, the function would have failed because the `select` statement outputs a dataframe rather than a vector (and `quantile` would have become very angry with you). One weakness with the above method is that it only allows you to input a single vector into the base R function (while many functions can take in multiple vectors). A better way of doing this is to use the `with` function. The `with` function allows you to pipe a dataframe into the first argument and then reference the columns in that dataframe with just the field names. This makes using those base R functions easier and more similar in syntax to tidyverse functions. For example, the example above would become: ``` flights %>% filter(!is.na(arr_delay)) %>% with(quantile(arr_delay, probs = seq(from = 0, to = 1, by = 0.10))) ``` ``` ## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% ## -86 -26 -19 -14 -10 -5 1 9 21 52 1272 ``` This method also makes it easy to input multiple field names in this style. Let’s look at this with the `table` function[18](#fn18). ``` flights %>% filter(!is.na(arr_delay)) %>% with(table(origin, carrier)) ``` ``` ## carrier ## origin 9E AA AS B6 DL EV F9 FL HA MQ OO ## EWR 1193 3363 709 6472 4295 41557 0 0 0 2097 6 ## JFK 13742 13600 0 41666 20559 1326 0 0 342 6838 0 ## LGA 2359 14984 0 5911 22804 8225 681 3175 0 16102 23 ## carrier ## origin UA US VX WN YV ## EWR 45501 4326 1552 6056 0 ## JFK 4478 2964 3564 0 0 ## LGA 7803 12541 0 5988 544 ``` ### plotly The `plotly` package has a cool function, `ggplotly`, that wraps a `ggplot` object and turns it into HTML, allowing you to do things like zoom in and hover over points. It also has a `frame` argument that allows you to make animations or filter between points. Here is an example from the `flights` dataset. ``` p <- flights %>% group_by(hour, month) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay, group = month, frame = month))+ geom_point()+ geom_smooth() plotly::ggplotly(p) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` This is the base `ggplot` from which it is built. ``` flights %>% group_by(hour, month) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay, group = month))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ```
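If you want to keep the interactive version outside of the notebook, one option (my own aside, not from the original) is to write the widget to a standalone HTML file with `htmlwidgets::saveWidget()`. The file name here is just an example, and a fully self-contained file requires pandoc to be available.

```
# save the interactive plot as an HTML file you can open or share
htmlwidgets::saveWidget(plotly::ggplotly(p), "avg_delay_by_hour.html")
```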
5\.2: Filter rows ----------------- ### 5\.2\.4\. **1\. Find all flights that…** *1\.1\. Find flights that had an arrival delay of 2 \+ hrs* ``` filter(flights, arr_delay >= 120) %>% glimpse() ``` ``` ## Observations: 10,200 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ dep_time <int> 811, 848, 957, 1114, 1505, 1525, 1549, 1558, 17... ## $ sched_dep_time <int> 630, 1835, 733, 900, 1310, 1340, 1445, 1359, 16... ## $ dep_delay <dbl> 101, 853, 144, 134, 115, 105, 64, 119, 62, 103,... ## $ arr_time <int> 1047, 1001, 1056, 1447, 1638, 1831, 1912, 1718,... ## $ sched_arr_time <int> 830, 1950, 853, 1222, 1431, 1626, 1656, 1515, 1... ## $ arr_delay <dbl> 137, 851, 123, 145, 127, 125, 136, 123, 123, 13... ## $ carrier <chr> "MQ", "MQ", "UA", "UA", "EV", "B6", "EV", "EV",... ## $ flight <int> 4576, 3944, 856, 1086, 4497, 525, 4181, 5712, 4... ## $ tailnum <chr> "N531MQ", "N942MQ", "N534UA", "N76502", "N17984... ## $ origin <chr> "LGA", "JFK", "EWR", "LGA", "EWR", "EWR", "EWR"... ## $ dest <chr> "CLT", "BWI", "BOS", "IAH", "RIC", "MCO", "MCI"... ## $ air_time <dbl> 118, 41, 37, 248, 63, 152, 234, 53, 119, 154, 2...
## $ distance <dbl> 544, 184, 200, 1416, 277, 937, 1092, 228, 533, ... ## $ hour <dbl> 6, 18, 7, 9, 13, 13, 14, 13, 16, 16, 13, 14, 16... ## $ minute <dbl> 30, 35, 33, 0, 10, 40, 45, 59, 30, 20, 25, 22, ... ## $ time_hour <dttm> 2013-01-01 06:00:00, 2013-01-01 18:00:00, 2013... ``` *1\.2\.flew to Houston IAH or HOU* ``` filter(flights, dest %in% c("IAH", "HOU")) ``` ``` ## # A tibble: 9,313 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 623 627 -4 933 ## 4 2013 1 1 728 732 -4 1041 ## 5 2013 1 1 739 739 0 1104 ## 6 2013 1 1 908 908 0 1228 ## 7 2013 1 1 1028 1026 2 1350 ## 8 2013 1 1 1044 1045 -1 1352 ## 9 2013 1 1 1114 900 134 1447 ## 10 2013 1 1 1205 1200 5 1503 ## # ... with 9,303 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.3\.flew through American, United or Delta* ``` filter(flights, carrier %in% c("UA", "AA","DL")) ``` ``` ## # A tibble: 139,504 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 542 540 2 923 ## 4 2013 1 1 554 600 -6 812 ## 5 2013 1 1 554 558 -4 740 ## 6 2013 1 1 558 600 -2 753 ## 7 2013 1 1 558 600 -2 924 ## 8 2013 1 1 558 600 -2 923 ## 9 2013 1 1 559 600 -1 941 ## 10 2013 1 1 559 600 -1 854 ## # ... with 139,494 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.4\. Departed in Summer* ``` filter(flights, month <= 8 & month >= 6) ``` ``` ## # A tibble: 86,995 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 6 1 2 2359 3 341 ## 2 2013 6 1 451 500 -9 624 ## 3 2013 6 1 506 515 -9 715 ## 4 2013 6 1 534 545 -11 800 ## 5 2013 6 1 538 545 -7 925 ## 6 2013 6 1 539 540 -1 832 ## 7 2013 6 1 546 600 -14 850 ## 8 2013 6 1 551 600 -9 828 ## 9 2013 6 1 552 600 -8 647 ## 10 2013 6 1 553 600 -7 700 ## # ... with 86,985 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.5\. Arrived more than 2 hours late, but didn’t leave late* ``` filter(flights, arr_delay > 120, dep_delay >= 0) ``` ``` ## # A tibble: 10,008 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 811 630 101 1047 ## 2 2013 1 1 848 1835 853 1001 ## 3 2013 1 1 957 733 144 1056 ## 4 2013 1 1 1114 900 134 1447 ## 5 2013 1 1 1505 1310 115 1638 ## 6 2013 1 1 1525 1340 105 1831 ## 7 2013 1 1 1549 1445 64 1912 ## 8 2013 1 1 1558 1359 119 1718 ## 9 2013 1 1 1732 1630 62 2028 ## 10 2013 1 1 1803 1620 103 2008 ## # ... with 9,998 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` *1\.6\. 
were delayed at least an hour, but made up over 30 mins in flight* ``` filter(flights, (arr_delay - dep_delay) <= -30, dep_delay >= 60) ``` ``` ## # A tibble: 2,074 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 1716 1545 91 2140 ## 2 2013 1 1 2205 1720 285 46 ## 3 2013 1 1 2326 2130 116 131 ## 4 2013 1 3 1503 1221 162 1803 ## 5 2013 1 3 1821 1530 171 2131 ## 6 2013 1 3 1839 1700 99 2056 ## 7 2013 1 3 1850 1745 65 2148 ## 8 2013 1 3 1923 1815 68 2036 ## 9 2013 1 3 1941 1759 102 2246 ## 10 2013 1 3 1950 1845 65 2228 ## # ... with 2,064 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` ``` # #Equivalent solution: # filter(flights, (arr_delay - dep_delay) <= -30 & dep_delay >= 60) ``` *1\.7\. departed between midnight and 6am (inclusive)* ``` filter(flights, dep_time >= 0 & dep_time <= 600) ``` ``` ## # A tibble: 9,344 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 542 540 2 923 ## 4 2013 1 1 544 545 -1 1004 ## 5 2013 1 1 554 600 -6 812 ## 6 2013 1 1 554 558 -4 740 ## 7 2013 1 1 555 600 -5 913 ## 8 2013 1 1 557 600 -3 709 ## 9 2013 1 1 557 600 -3 838 ## 10 2013 1 1 558 600 -2 753 ## # ... with 9,334 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` ``` # # Equivalent solution: # filter(flights, dep_time >= 0, dep_time <= 600) ``` **2\. Another useful `dplyr` filtering helper is `between()`. What does it do? Can you use it to simplify the code needed to answer the previous challenges?** This is a shortcut for `x >= left & x <= right` solving 1\.7\. using `between`: ``` filter(flights, between(dep_time, 0, 600)) ``` **3\. How many flights have a missing `dep_time`? What other variables are missing? What might these rows represent?** ``` filter(flights, is.na(dep_time)) ``` ``` ## # A tibble: 8,255 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 NA 1630 NA NA ## 2 2013 1 1 NA 1935 NA NA ## 3 2013 1 1 NA 1500 NA NA ## 4 2013 1 1 NA 600 NA NA ## 5 2013 1 2 NA 1540 NA NA ## 6 2013 1 2 NA 1620 NA NA ## 7 2013 1 2 NA 1355 NA NA ## 8 2013 1 2 NA 1420 NA NA ## 9 2013 1 2 NA 1321 NA NA ## 10 2013 1 2 NA 1545 NA NA ## # ... with 8,245 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` 8255, perhaps these are canceled flights. **4\. Why is `NA ^ 0` not missing? Why is `NA | TRUE` not missing? Why is `FALSE & NA` not missing? Can you figure out the general rule? (`NA * 0` is a tricky counterexample!)** ``` NA^0 ``` ``` ## [1] 1 ``` Anything raised to the 0 is 1\. ``` FALSE & NA ``` ``` ## [1] FALSE ``` For the “AND” operator `&` for it to be `TRUE` both values would need to be `TRUE` so if one is `FALSE` the entire statment must be. 
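The flip side of that logic is worth seeing too (my own addition to the sequence): when the value that is known does not settle the answer, the missingness carries through, which is the general rule the question is hinting at.

```
# TRUE doesn't determine the result of an AND, so the answer stays unknown
NA & TRUE
```

```
## [1] NA
```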
``` TRUE | NA ``` ``` ## [1] TRUE ``` The “OR” operator `|` specifies that if at least one of the values is `TRUE` the whole statement is, so because one is already `TRUE` the whole statement must be. ``` NA*0 ``` ``` ## [1] NA ``` This does not come out to 0 as expected because it is possible that `NA` could represent `Inf` or `-Inf`, in which case the output is `NaN` rather than 0\. ``` Inf*0 ``` ``` ## [1] NaN ``` See this article for more details: [https://math.stackexchange.com/questions/28940/why\-is\-infinity\-multiplied\-by\-zero\-not\-an\-easy\-zero\-answer](https://math.stackexchange.com/questions/28940/why-is-infinity-multiplied-by-zero-not-an-easy-zero-answer) . 5\.3: Arrange rows ------------------ ### 5\.3\.1\. **1\.
use `arrange()` to sort out all missing values to start** ``` df <- tibble(x = c(5, 2, NA)) arrange(df, !is.na(x)) ``` ``` ## # A tibble: 3 x 1 ## x ## <dbl> ## 1 NA ## 2 5 ## 3 2 ``` **2\. Find most delayed departures** ``` arrange(flights, desc(dep_delay)) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 6, 1, 9, 7, 4, 3, 6, 7, 12, 5, 1, 2, 5, 12, ... ## $ day <int> 9, 15, 10, 20, 22, 10, 17, 27, 22, 5, 3, 1, 10,... ## $ dep_time <int> 641, 1432, 1121, 1139, 845, 1100, 2321, 959, 22... ## $ sched_dep_time <int> 900, 1935, 1635, 1845, 1600, 1900, 810, 1900, 7... ## $ dep_delay <dbl> 1301, 1137, 1126, 1014, 1005, 960, 911, 899, 89... ## $ arr_time <int> 1242, 1607, 1239, 1457, 1044, 1342, 135, 1236, ... ## $ sched_arr_time <int> 1530, 2120, 1810, 2210, 1815, 2211, 1020, 2226,... ## $ arr_delay <dbl> 1272, 1127, 1109, 1007, 989, 931, 915, 850, 895... ## $ carrier <chr> "HA", "MQ", "MQ", "AA", "MQ", "DL", "DL", "DL",... ## $ flight <int> 51, 3535, 3695, 177, 3075, 2391, 2119, 2007, 20... ## $ tailnum <chr> "N384HA", "N504MQ", "N517MQ", "N338AA", "N665MQ... ## $ origin <chr> "JFK", "JFK", "EWR", "JFK", "JFK", "JFK", "LGA"... ## $ dest <chr> "HNL", "CMH", "ORD", "SFO", "CVG", "TPA", "MSP"... ## $ air_time <dbl> 640, 74, 111, 354, 96, 139, 167, 313, 109, 149,... ## $ distance <dbl> 4983, 483, 719, 2586, 589, 1005, 1020, 2454, 76... ## $ hour <dbl> 9, 19, 16, 18, 16, 19, 8, 19, 7, 17, 20, 18, 8,... ## $ minute <dbl> 0, 35, 35, 45, 0, 0, 10, 0, 59, 0, 55, 35, 30, ... ## $ time_hour <dttm> 2013-01-09 09:00:00, 2013-06-15 19:00:00, 2013... ``` **3\. Find the fastest flights** ``` arrange(flights, air_time) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 4, 12, 2, 2, 2, 3, 3, 3, 3, 5, 5, 6, 8, 9, 9... ## $ day <int> 16, 13, 6, 3, 5, 12, 2, 8, 18, 19, 8, 19, 12, 1... ## $ dep_time <int> 1355, 537, 922, 2153, 1303, 2123, 1450, 2026, 1... ## $ sched_dep_time <int> 1315, 527, 851, 2129, 1315, 2130, 1500, 1935, 1... ## $ dep_delay <dbl> 40, 10, 31, 24, -12, -7, -10, 51, 87, 41, 137, ... ## $ arr_time <int> 1442, 622, 1021, 2247, 1342, 2211, 1547, 2131, ... ## $ sched_arr_time <int> 1411, 628, 954, 2224, 1411, 2225, 1608, 2056, 1... ## $ arr_delay <dbl> 31, -6, 27, 23, -29, -14, -21, 35, 67, 19, 109,... ## $ carrier <chr> "EV", "EV", "EV", "EV", "EV", "EV", "US", "9E",... ## $ flight <int> 4368, 4631, 4276, 4619, 4368, 4619, 2132, 3650,... ## $ tailnum <chr> "N16911", "N12167", "N27200", "N13913", "N13955... ## $ origin <chr> "EWR", "EWR", "EWR", "EWR", "EWR", "EWR", "LGA"... ## $ dest <chr> "BDL", "BDL", "BDL", "PHL", "BDL", "PHL", "BOS"... ## $ air_time <dbl> 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,... ## $ distance <dbl> 116, 116, 116, 80, 116, 80, 184, 94, 116, 116, ... ## $ hour <dbl> 13, 5, 8, 21, 13, 21, 15, 19, 13, 21, 21, 21, 2... ## $ minute <dbl> 15, 27, 51, 29, 15, 30, 0, 35, 29, 45, 59, 59, ... ## $ time_hour <dttm> 2013-01-16 13:00:00, 2013-04-13 05:00:00, 2013... ``` **4\. Flights traveling the longest distance** ``` arrange(flights, desc(distance)) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... ## $ dep_time <int> 857, 909, 914, 900, 858, 1019, 1042, 901, 641, ... 
## $ sched_dep_time <int> 900, 900, 900, 900, 900, 900, 900, 900, 900, 90... ## $ dep_delay <dbl> -3, 9, 14, 0, -2, 79, 102, 1, 1301, -1, -5, 1, ... ## $ arr_time <int> 1516, 1525, 1504, 1516, 1519, 1558, 1620, 1504,... ## $ sched_arr_time <int> 1530, 1530, 1530, 1530, 1530, 1530, 1530, 1530,... ## $ arr_delay <dbl> -14, -5, -26, -14, -11, 28, 50, -26, 1272, -41,... ## $ carrier <chr> "HA", "HA", "HA", "HA", "HA", "HA", "HA", "HA",... ## $ flight <int> 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51,... ## $ tailnum <chr> "N380HA", "N380HA", "N380HA", "N384HA", "N381HA... ## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "JFK", "JFK", "JFK"... ## $ dest <chr> "HNL", "HNL", "HNL", "HNL", "HNL", "HNL", "HNL"... ## $ air_time <dbl> 659, 638, 616, 639, 635, 611, 612, 645, 640, 63... ## $ distance <dbl> 4983, 4983, 4983, 4983, 4983, 4983, 4983, 4983,... ## $ hour <dbl> 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9,... ## $ minute <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,... ## $ time_hour <dttm> 2013-01-01 09:00:00, 2013-01-02 09:00:00, 2013... ``` **and the shortest distance.** ``` arrange(flights, distance) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 19 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 27, 3, 4, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ... ## $ dep_time <int> NA, 2127, 1240, 1829, 2128, 1155, 2125, 2124, 2... ## $ sched_dep_time <int> 106, 2129, 1200, 1615, 2129, 1200, 2129, 2129, ... ## $ dep_delay <dbl> NA, -2, 40, 134, -1, -5, -4, -5, -3, -3, 4, 6, ... ## $ arr_time <int> NA, 2222, 1333, 1937, 2218, 1241, 2224, 2212, 2... ## $ sched_arr_time <int> 245, 2224, 1306, 1721, 2224, 1306, 2224, 2224, ... ## $ arr_delay <dbl> NA, -2, 27, 136, -6, -25, 0, -12, 39, -7, -1, 9... ## $ carrier <chr> "US", "EV", "EV", "EV", "EV", "EV", "EV", "EV",... ## $ flight <int> 1632, 3833, 4193, 4502, 4645, 4193, 4619, 4619,... ## $ tailnum <chr> NA, "N13989", "N14972", "N15983", "N27962", "N1... ## $ origin <chr> "EWR", "EWR", "EWR", "EWR", "EWR", "EWR", "EWR"... ## $ dest <chr> "LGA", "PHL", "PHL", "PHL", "PHL", "PHL", "PHL"... ## $ air_time <dbl> NA, 30, 30, 28, 32, 29, 22, 25, 30, 27, 30, 30,... ## $ distance <dbl> 17, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80, 80,... ## $ hour <dbl> 1, 21, 12, 16, 21, 12, 21, 21, 21, 21, 21, 21, ... ## $ minute <dbl> 6, 29, 0, 15, 29, 0, 29, 29, 30, 29, 29, 29, 17... ## $ time_hour <dttm> 2013-07-27 01:00:00, 2013-01-03 21:00:00, 2013... ``` 5\.4: Select columns -------------------- ### 5\.4\.1\. **1\. Brainstorm as many ways as possible to select `dep_time`, `dep_delay`, `arr_time`, and `arr_delay` from `flights`.** ``` vars <- c("dep_time", "dep_delay", "arr_time", "arr_delay") ``` ``` #method 1 select(flights, vars) #method 2, probably indexes <- which(names(flights) %in% vars) select(flights, indexes) #method 3 select(flights, contains("_time"), contains("_delay"), -contains("sched"), -contains("air")) ``` ``` #method 4 select(flights, starts_with("dep"), starts_with("arr")) %>% select(ends_with("time"), ends_with("delay")) ``` ``` ## # A tibble: 336,776 x 4 ## dep_time arr_time dep_delay arr_delay ## <int> <int> <dbl> <dbl> ## 1 517 830 2 11 ## 2 533 850 4 20 ## 3 542 923 2 33 ## 4 544 1004 -1 -18 ## 5 554 812 -6 -25 ## 6 554 740 -4 12 ## 7 555 913 -5 19 ## 8 557 709 -3 -14 ## 9 557 838 -3 -8 ## 10 558 753 -2 8 ## # ... with 336,766 more rows ``` **2\. 
What happens if you include the name of a variable multiple times in a `select()` call?** It only shows up once. **3\. What does the `one_of()` function do? Why might it be helpful in conjunction with this vector?** `vars <- c("year", "month", "day", "dep_delay", "arr_delay")` Can be used to select multiple variables with a character vector or to negate selecting certain variables. **4\. Does the result of running the following code surprise you? How do the select helpers deal with case by default? How can you change that default?** ``` select(flights, contains("TIME")) ``` ``` ## # A tibble: 336,776 x 6 ## dep_time sched_dep_time arr_time sched_arr_time air_time ## <int> <int> <int> <int> <dbl> ## 1 517 515 830 819 227 ## 2 533 529 850 830 227 ## 3 542 540 923 850 160 ## 4 544 545 1004 1022 183 ## 5 554 600 812 837 116 ## 6 554 558 740 728 150 ## 7 555 600 913 854 158 ## 8 557 600 709 723 53 ## 9 557 600 838 846 140 ## 10 558 600 753 745 138 ## # ... with 336,766 more rows, and 1 more variable: time_hour <dttm> ``` The default is case insensitive; to change this, specify `ignore.case = FALSE`. ``` select(flights, contains("TIME", ignore.case = FALSE)) ``` ``` ## # A tibble: 336,776 x 0 ``` 5\.5: Add new vars ------------------ Check out the different rank functions ``` x <- c(1, 2, 3, 4, 4, 6, 7, 8, 8, 10) min_rank(x) ``` ``` ## [1] 1 2 3 4 4 6 7 8 8 10 ``` ``` dense_rank(x) ``` ``` ## [1] 1 2 3 4 4 5 6 7 7 8 ``` ``` percent_rank(x) ``` ``` ## [1] 0.0000000 0.1111111 0.2222222 0.3333333 0.3333333 0.5555556 0.6666667 ## [8] 0.7777778 0.7777778 1.0000000 ``` ``` cume_dist(x) ``` ``` ## [1] 0.1 0.2 0.3 0.5 0.5 0.6 0.7 0.9 0.9 1.0 ``` ### 5\.5\.2\. **1\. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they’re not really continuous numbers. Convert them to a more convenient representation of number of minutes since midnight.** ``` time_to_mins <- function(x) (60*(x %/% 100) + (x %% 100)) ``` ``` flights_new <- mutate(flights, DepTime_MinsToMid = time_to_mins(dep_time), #same thing as above, but without calling custom function DepTime_MinsToMid_copy = (60*(dep_time %/% 100) + (dep_time %% 100)), SchedDepTime_MinsToMid = time_to_mins(sched_dep_time)) ``` **2\. Compare `air_time` with `arr_time` \- `dep_time`. What do you expect to see? What do you see? What do you need to do to fix it?** You would expect that: \\(air\\\_time \= arr\\\_time \- dep\\\_time\\) However this does not seem to be the case when you look at `air_time` generally… see [5\.5\.2\.2\.](05-data-transformations.html#section-15) for more details. **3\. Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you expect those three numbers to be related?** You would expect that: \\(dep\\\_delay \= dep\\\_time \- sched\\\_dep\\\_time\\) . Let’s see if this is the case by creating a variable `dep_delay2` that uses this definition, then checking whether it is equal to the original `dep_delay`. ``` ##maybe a couple off, but for the most part seems consistent mutate(flights, dep_delay2 = time_to_mins(dep_time) - time_to_mins(sched_dep_time), dep_same = dep_delay == dep_delay2) %>% count(dep_same) ``` ``` ## # A tibble: 3 x 2 ## dep_same n ## <lgl> <int> ## 1 FALSE 1207 ## 2 TRUE 327314 ## 3 NA 8255 ``` Seems generally to align (with `dep_delay`). Those that are inconsistent are when the delay bleeds into the next day, indicating a problem with my equation above, not the `dep_delay` value, as you can see below.
``` mutate(flights, dep_delay2 = time_to_mins(dep_time) - time_to_mins(sched_dep_time), dep_same = dep_delay == dep_delay2) %>% filter(!dep_same) %>% glimpse() ``` ``` ## Observations: 1,207 ## Variables: 21 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 9, 9, 9, 10... ## $ dep_time <int> 848, 42, 126, 32, 50, 235, 25, 106, 14, 37, 16,... ## $ sched_dep_time <int> 1835, 2359, 2250, 2359, 2145, 2359, 2359, 2245,... ## $ dep_delay <dbl> 853, 43, 156, 33, 185, 156, 26, 141, 15, 127, 1... ## $ arr_time <int> 1001, 518, 233, 504, 203, 700, 505, 201, 503, 3... ## $ sched_arr_time <int> 1950, 442, 2359, 442, 2311, 437, 442, 2356, 445... ## $ arr_delay <dbl> 851, 36, 154, 22, 172, 143, 23, 125, 18, 130, 9... ## $ carrier <chr> "MQ", "B6", "B6", "B6", "B6", "B6", "B6", "B6",... ## $ flight <int> 3944, 707, 22, 707, 104, 727, 707, 608, 739, 11... ## $ tailnum <chr> "N942MQ", "N580JB", "N636JB", "N763JB", "N329JB... ## $ origin <chr> "JFK", "JFK", "JFK", "JFK", "JFK", "JFK", "JFK"... ## $ dest <chr> "BWI", "SJU", "SYR", "SJU", "BUF", "BQN", "SJU"... ## $ air_time <dbl> 41, 189, 49, 193, 58, 186, 194, 44, 201, 163, 1... ## $ distance <dbl> 184, 1598, 209, 1598, 301, 1576, 1598, 273, 161... ## $ hour <dbl> 18, 23, 22, 23, 21, 23, 23, 22, 23, 22, 23, 23,... ## $ minute <dbl> 35, 59, 50, 59, 45, 59, 59, 45, 59, 30, 59, 59,... ## $ time_hour <dttm> 2013-01-01 18:00:00, 2013-01-02 23:00:00, 2013... ## $ dep_delay2 <dbl> -587, -1397, -1284, -1407, -1255, -1284, -1414,... ## $ dep_same <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE... ``` **4\. Find the 10 most delayed flights using a ranking function. How do you want to handle ties? Carefully read the documentation for `min_rank()`.** ``` mutate(flights, rank_delay = min_rank(-arr_delay)) %>% arrange(rank_delay) %>% filter(rank_delay <= 10) %>% select(flight, sched_dep_time, arr_delay, rank_delay) ``` ``` ## # A tibble: 10 x 4 ## flight sched_dep_time arr_delay rank_delay ## <int> <int> <dbl> <int> ## 1 51 900 1272 1 ## 2 3535 1935 1127 2 ## 3 3695 1635 1109 3 ## 4 177 1845 1007 4 ## 5 3075 1600 989 5 ## 6 2391 1900 931 6 ## 7 2119 810 915 7 ## 8 2047 759 895 8 ## 9 172 1700 878 9 ## 10 3744 2055 875 10 ``` **5\. What does `1:3 + 1:10` return? Why?** ``` 1:3 + 1:10 ``` ``` ## Warning in 1:3 + 1:10: longer object length is not a multiple of shorter ## object length ``` ``` ## [1] 2 4 6 5 7 9 8 10 12 11 ``` This is returned because `1:3` is being recycled as each element is added to an element in 1:10\. **6\. What trigonometric functions does R provide?** ``` ?sin ``` 5\.6: Grouped summaries… ------------------------ ``` not_cancelled <- flights %>% filter(!is.na(dep_delay), !is.na(arr_delay)) not_cancelled %>% select(year, month, day, dep_time) %>% group_by(year, month, day) %>% mutate(r = min_rank(desc(dep_time))) %>% mutate(range_min = range(r)[1], range_max = range(r)[2]) %>% filter(r %in% range(r)) ``` ### 5\.6\.7\. **1\. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.** *90th percentile for delays for flights by destination* ``` flights %>% group_by(dest) %>% summarise(delay.90 = quantile(arr_delay, 0.90, na.rm = TRUE)) %>% arrange(desc(delay.90)) ``` ``` ## # A tibble: 105 x 2 ## dest delay.90 ## <chr> <dbl> ## 1 TUL 126 ## 2 TYS 109. ## 3 CAE 107 ## 4 DSM 103 ## 5 OKC 99.6 ## 6 BHM 99.2 ## 7 RIC 90 ## 8 PVD 81.3 ## 9 CRW 80.8 ## 10 CVG 80 ## # ... 
with 95 more rows ``` *average `dep_delay` by hour of day* ``` flights %>% group_by(hour) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` *Percentage of flights delayed or canceled by `origin`* ``` flights %>% group_by(origin) %>% summarise(num_delayed = sum(arr_delay > 0, na.rm = TRUE)/n()) ``` ``` ## # A tibble: 3 x 2 ## origin num_delayed ## <chr> <dbl> ## 1 EWR 0.415 ## 2 JFK 0.385 ## 3 LGA 0.382 ``` *Percentage of flights canceled by airline* (technically not delays, but cancellations…) ``` flights %>% group_by(carrier) %>% summarise(perc_canceled = sum(is.na(arr_delay))/n(), n = n()) %>% ungroup() %>% filter(n >= 1000) %>% mutate(most_rank = min_rank(-perc_canceled)) %>% arrange(most_rank) ``` ``` ## # A tibble: 11 x 4 ## carrier perc_canceled n most_rank ## <chr> <dbl> <int> <int> ## 1 9E 0.0632 18460 1 ## 2 EV 0.0566 54173 2 ## 3 MQ 0.0515 26397 3 ## 4 US 0.0343 20536 4 ## 5 FL 0.0261 3260 5 ## 6 AA 0.0239 32729 6 ## 7 WN 0.0188 12275 7 ## 8 UA 0.0151 58665 8 ## 9 B6 0.0107 54635 9 ## 10 DL 0.00940 48110 10 ## 11 VX 0.00891 5162 11 ``` *Percentage of flights delayed by airline* ``` flights %>% group_by(carrier) %>% summarise(perc_delayed = sum(arr_delay > 0, na.rm = TRUE)/sum(!is.na(arr_delay)), n = n()) %>% ungroup() %>% filter(n >= 1000) %>% mutate(most_rank = min_rank(-perc_delayed)) %>% arrange(most_rank) ``` ``` ## # A tibble: 11 x 4 ## carrier perc_delayed n most_rank ## <chr> <dbl> <int> <int> ## 1 FL 0.597 3260 1 ## 2 EV 0.479 54173 2 ## 3 MQ 0.467 26397 3 ## 4 WN 0.440 12275 4 ## 5 B6 0.437 54635 5 ## 6 UA 0.385 58665 6 ## 7 9E 0.384 18460 7 ## 8 US 0.371 20536 8 ## 9 DL 0.344 48110 9 ## 10 VX 0.341 5162 10 ## 11 AA 0.335 32729 11 ``` **Consider the following scenarios:** *1\.1 A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(early.15 = sum(arr_delay <= -15, na.rm = TRUE)/n(), late.15 = sum(arr_delay >= 15, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(early.15 == .5, late.15 == .5) ``` ``` ## # A tibble: 18 x 4 ## flight early.15 late.15 n ## <int> <dbl> <dbl> <int> ## 1 107 0.5 0.5 2 ## 2 2072 0.5 0.5 2 ## 3 2366 0.5 0.5 2 ## 4 2500 0.5 0.5 2 ## 5 2552 0.5 0.5 2 ## 6 3495 0.5 0.5 2 ## 7 3518 0.5 0.5 2 ## 8 3544 0.5 0.5 2 ## 9 3651 0.5 0.5 2 ## 10 3705 0.5 0.5 2 ## 11 3916 0.5 0.5 2 ## 12 3951 0.5 0.5 2 ## 13 4273 0.5 0.5 2 ## 14 4313 0.5 0.5 2 ## 15 5297 0.5 0.5 2 ## 16 5322 0.5 0.5 2 ## 17 5388 0.5 0.5 2 ## 18 5505 0.5 0.5 4 ``` *1\.2 A flight is always 10 minutes late.* ``` flights %>% group_by(flight) %>% summarise(late.10 = sum(arr_delay >= 10)/n()) %>% ungroup() %>% filter(late.10 == 1) ``` ``` ## # A tibble: 93 x 2 ## flight late.10 ## <int> <dbl> ## 1 94 1 ## 2 730 1 ## 3 974 1 ## 4 1084 1 ## 5 1226 1 ## 6 1510 1 ## 7 1514 1 ## 8 1859 1 ## 9 1868 1 ## 10 2101 1 ## # ... 
with 83 more rows ``` *1\.3 A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(early.30 = sum(arr_delay <= -30, na.rm = TRUE)/n(), late.30 = sum(arr_delay >= 30, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(early.30 == .5, late.30 == .5) ``` ``` ## # A tibble: 3 x 4 ## flight early.30 late.30 n ## <int> <dbl> <dbl> <int> ## 1 3651 0.5 0.5 2 ## 2 3916 0.5 0.5 2 ## 3 3951 0.5 0.5 2 ``` *1\.4 99% of the time a flight is on time. 1% of the time it’s 2 hours late.* ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(ontime == .99, late.120 == .01) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: flight <int>, ontime <dbl>, late.120 <dbl>, ## # n <int> ``` Looks like this exact proportion doesn’t happen. Let’s change this to be \>\= 99% and \<\= 1%. ``` flights %>% group_by(flight) %>% # filter(!is.na(arr_delay)) %>% ##Keeping this in would exclude the possibility of canceled summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter(ontime >= .99, late.120 <= .01) ``` ``` ## # A tibble: 391 x 4 ## flight ontime late.120 n ## <int> <dbl> <dbl> <int> ## 1 46 1 0 2 ## 2 52 1 0 2 ## 3 88 1 0 1 ## 4 90 1 0 1 ## 5 96 1 0 1 ## 6 99 1 0 1 ## 7 106 1 0 1 ## 8 122 1 0 1 ## 9 174 1 0 1 ## 10 202 1 0 5 ## # ... with 381 more rows ``` **2\. Which is more important: arrival delay or departure delay?** Arrival delay. **3\. Come up with another approach that will give you the same output as `not_cancelled %>% count(dest)` and `not_cancelled %>% count(tailnum, wt = distance)` (without using `count()`).** ``` not_cancelled <- flights %>% filter(!is.na(dep_delay), !is.na(arr_delay)) not_cancelled %>% group_by(dest) %>% summarise(n = n()) ``` ``` ## # A tibble: 104 x 2 ## dest n ## <chr> <int> ## 1 ABQ 254 ## 2 ACK 264 ## 3 ALB 418 ## 4 ANC 8 ## 5 ATL 16837 ## 6 AUS 2411 ## 7 AVL 261 ## 8 BDL 412 ## 9 BGR 358 ## 10 BHM 269 ## # ... with 94 more rows ``` ``` not_cancelled %>% group_by(tailnum) %>% summarise(n = sum(distance)) ``` ``` ## # A tibble: 4,037 x 2 ## tailnum n ## <chr> <dbl> ## 1 D942DN 3418 ## 2 N0EGMQ 239143 ## 3 N10156 109664 ## 4 N102UW 25722 ## 5 N103US 24619 ## 6 N104UW 24616 ## 7 N10575 139903 ## 8 N105UW 23618 ## 9 N107US 21677 ## 10 N108UW 32070 ## # ... with 4,027 more rows ``` **4\. Our definition of cancelled flights (`is.na(dep_delay) | is.na(arr_delay)`) is slightly suboptimal. Why? Which is the most important column?** You only need the `is.na(arr_delay)` column. By having both, it is doing more checks then is necessary. (While not a perfect method) you can see that the number of rows with just `is.na(arr_delay)` would be the same in either case. ``` filter(flights, is.na(dep_delay) | is.na(arr_delay)) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 9430 ``` ``` filter(flights, is.na(arr_delay)) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 9430 ``` To be more precise, you could check these with the `identical` function. 
``` check_1 <- filter(flights, is.na(dep_delay) | is.na(arr_delay)) check_2 <- filter(flights, is.na(arr_delay)) identical(check_1, check_2) ``` ``` ## [1] TRUE ``` **5\. Look at the number of cancelled flights per day. Is there a pattern?** Number of canceled flights by day of month: ``` flights %>% group_by(day) %>% summarise(num = n(), cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled))+ geom_line() ``` * Some days of the month have more cancellations **Is the proportion of cancelled flights related to the average delay?** Proporton of canceled flights and then average delay of flights by day: ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled_perc))+ geom_line() flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = avg_delayed))+ geom_line() ``` * Looks roughly like there is some overlap. Plot, treating day independently: ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = cancelled_perc, y = avg_delayed))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` * suggests positive association **6\. Which carrier has the worst delays?** ``` flights %>% group_by(carrier) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE), n = n()) %>% arrange(desc(avg_delay)) ``` ``` ## # A tibble: 16 x 3 ## carrier avg_delay n ## <chr> <dbl> <int> ## 1 F9 21.9 685 ## 2 FL 20.1 3260 ## 3 EV 15.8 54173 ## 4 YV 15.6 601 ## 5 OO 11.9 32 ## 6 MQ 10.8 26397 ## 7 WN 9.65 12275 ## 8 B6 9.46 54635 ## 9 9E 7.38 18460 ## 10 UA 3.56 58665 ## 11 US 2.13 20536 ## 12 VX 1.76 5162 ## 13 DL 1.64 48110 ## 14 AA 0.364 32729 ## 15 HA -6.92 342 ## 16 AS -9.93 714 ``` **Challenge: can you disentangle the effects of bad airports vs. bad carriers? Why/why not? (Hint: think about flights %\>% group\_by(carrier, dest) %\>% summarise(n()))** Somewhat difficult to untangle in the `origin` airports because carriers may predominantly go through one of the three. The code below produces the origin name that the carrier that flies from the most along with the proportion of associated flights. ``` flights %>% group_by(carrier, origin) %>% summarise(n = n()) %>% mutate(perc = n / sum(n)) %>% group_by(carrier) %>% mutate(rank = min_rank(-perc)) %>% arrange(carrier, rank) %>% filter(rank == 1) %>% select(carrier, highest_origin = origin, highest_prop = perc, n_total = n) %>% arrange(desc(n_total)) ``` ``` ## # A tibble: 16 x 4 ## # Groups: carrier [16] ## carrier highest_origin highest_prop n_total ## <chr> <chr> <dbl> <int> ## 1 UA EWR 0.786 46087 ## 2 EV EWR 0.811 43939 ## 3 B6 JFK 0.770 42076 ## 4 DL LGA 0.479 23067 ## 5 MQ LGA 0.641 16928 ## 6 AA LGA 0.472 15459 ## 7 9E JFK 0.794 14651 ## 8 US LGA 0.640 13136 ## 9 WN EWR 0.504 6188 ## 10 VX JFK 0.697 3596 ## 11 FL LGA 1 3260 ## 12 AS EWR 1 714 ## 13 F9 LGA 1 685 ## 14 YV LGA 1 601 ## 15 HA JFK 1 342 ## 16 OO LGA 0.812 26 ``` Below we look at destinations and the `carrier` that has the highest proportion of flights from one of the NYC destinations (ignoring for specific `origin` – JFK, LGA, etc. 
are not separated). ``` flights %>% group_by(dest, carrier) %>% summarise(n = n()) %>% mutate(perc = n / sum(n)) %>% group_by(dest) %>% mutate(rank = min_rank(-perc)) %>% arrange(carrier, rank) %>% filter(rank == 1) %>% select(dest, highest_carrier = carrier, highest_perc = perc, n_total = n) %>% arrange(desc(n_total)) ``` ``` ## # A tibble: 105 x 4 ## # Groups: dest [105] ## dest highest_carrier highest_perc n_total ## <chr> <chr> <dbl> <int> ## 1 ATL DL 0.614 10571 ## 2 CLT US 0.614 8632 ## 3 DFW AA 0.831 7257 ## 4 MIA AA 0.617 7234 ## 5 ORD UA 0.404 6984 ## 6 IAH UA 0.962 6924 ## 7 SFO UA 0.512 6819 ## 8 FLL B6 0.544 6563 ## 9 MCO B6 0.460 6472 ## 10 LAX UA 0.360 5823 ## # ... with 95 more rows ``` To get at the question of ‘best carrier’, you might do a grouped comparison of average delays or cancellations, controlling for where carriers fly to and from which origin… Or build a linear model with the formula `arr_delay ~ carrier + dest + origin`. **7\. What does the `sort` argument to `count()` do? When might you use it?** `sort = TRUE` orders the result by `n` in descending order; you might use it when you want to see the most frequent levels first.
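For example (a small illustration added here, not part of the original answer):

```
# by default the counts come back ordered by the grouping variable;
# sort = TRUE returns them ordered by n, largest first
flights %>% count(carrier, sort = TRUE)
```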
5\.7: Grouped mutates… ----------------------
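Before the exercises, a quick toy illustration of what a grouped mutate / grouped filter does (my own example, not from the original text): window functions such as `min_rank()` are applied within each group, so you can, for instance, keep the two most\-delayed flights per destination.

```
flights %>%
  group_by(dest) %>%
  mutate(delay_rank = min_rank(desc(arr_delay))) %>%  # rank computed within each dest
  filter(delay_rank <= 2) %>%                         # grouped filter: top 2 per group
  select(dest, flight, arr_delay, delay_rank) %>%
  arrange(dest, delay_rank)
```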
### 5\.7\.1\. **1\. Refer back to the lists of useful mutate and filtering functions. Describe how each operation changes when you combine it with grouping.** Performs operations on vectors for each group (rather than all together). **2\.
Which plane (tailnum) has the worst on\-time record?** ``` flights %>% group_by(tailnum) %>% summarise(n = n(), num_not_delayed = sum(arr_delay <= 0, na.rm = TRUE), ontime_rate = num_not_delayed/ n, sum_delayed_time_grt0 = sum(ifelse(arr_delay >= 0, arr_delay, 0), na.rm = TRUE)) %>% filter(n > 100, !is.na(tailnum)) %>% arrange(ontime_rate) ``` ``` ## # A tibble: 1,200 x 5 ## tailnum n num_not_delayed ontime_rate sum_delayed_time_grt0 ## <chr> <int> <int> <dbl> <dbl> ## 1 N505MQ 242 83 0.343 5911 ## 2 N15910 280 105 0.375 8737 ## 3 N36915 228 86 0.377 6392 ## 4 N16919 251 96 0.382 7955 ## 5 N14998 230 88 0.383 7166 ## 6 N14953 256 100 0.391 6550 ## 7 N22971 230 90 0.391 6547 ## 8 N503MQ 191 75 0.393 4420 ## 9 N27152 109 43 0.394 2058 ## 10 N31131 109 43 0.394 2740 ## # ... with 1,190 more rows ``` N505MQ **3\. What time of day should you fly if you want to avoid delays as much as possible?** average `dep_delay` by hour of day ``` flights %>% group_by(hour) %>% summarise(med_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = med_delay))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` Fly in the morning. **4\. For each destination, compute the total minutes of delay. For each, flight, compute the proportion of the total delay for its destination.** ``` flights %>% filter(arr_delay > 0) %>% group_by(dest, flight) %>% summarise(TotalDelay_DestFlight = sum(arr_delay, na.rm = TRUE)) %>% mutate(TotalDelay_Dest = sum(TotalDelay_DestFlight), PropOfDest = TotalDelay_DestFlight / TotalDelay_Dest) ``` ``` ## # A tibble: 8,505 x 5 ## # Groups: dest [103] ## dest flight TotalDelay_DestFlight TotalDelay_Dest PropOfDest ## <chr> <int> <dbl> <dbl> <dbl> ## 1 ABQ 65 1943 4487 0.433 ## 2 ABQ 1505 2544 4487 0.567 ## 3 ACK 1191 1413 2974 0.475 ## 4 ACK 1195 62 2974 0.0208 ## 5 ACK 1291 267 2974 0.0898 ## 6 ACK 1491 1232 2974 0.414 ## 7 ALB 3260 111 9580 0.0116 ## 8 ALB 3264 4 9580 0.000418 ## 9 ALB 3811 599 9580 0.0625 ## 10 ALB 3817 196 9580 0.0205 ## # ... with 8,495 more rows ``` I did this such that flights could not have “negative” delays, this could have been approached differently such that “early arrivals” also got credit… **5\. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using lag() explore how the delay of a flight is related to the delay of the immediately preceding flight.** ``` flights %>% group_by(origin) %>% mutate(delay_lag = lag(dep_delay, 1), diff_lag = abs(dep_delay -delay_lag)) %>% ungroup() %>% select(dep_delay, delay_lag) %>% na.omit() %>% cor() ``` ``` ## dep_delay delay_lag ## dep_delay 1.0000000 0.3506866 ## delay_lag 0.3506866 1.0000000 ``` Correlation of dep\_delayt\-1 with dep\_delayt is 0\.35\. Below is a function to get the correlation out for any lag level. ``` cor_by_lag <- function(lag){ flights %>% group_by(origin) %>% mutate(delay_lag = lag(dep_delay, lag), diff_lag = abs(dep_delay -delay_lag)) %>% ungroup() %>% select(dep_delay, delay_lag) %>% na.omit() %>% cor() %>% .[2,1] %>% as.vector() } ``` Let’s see the correlation pushing the lag time back. 
``` cor_by_lag(1) ``` ``` ## [1] 0.3506866 ``` ``` cor_by_lag(10) ``` ``` ## [1] 0.2622796 ``` ``` cor_by_lag(100) ``` ``` ## [1] 0.04023232 ``` It makes sense that these values get smaller as flights that are further apart have delay lengths that are less correlated. See [5\.7\.1\.8\.](05-data-transformations.html#section-23) for the outputs if iterating this function across many lags. **6\. Look at each destination. Can you find flights that are suspiciously fast? (i.e. flights that represent a potential data entry error). Compute the air time a flight relative to the shortest flight to that destination. Which flights were most delayed in the air?** ``` flights %>% filter(!is.na(arr_delay)) %>% group_by(dest) %>% mutate(sd_air_time = sd(air_time), mean_air_time = mean(air_time)) %>% ungroup() %>% mutate(supect_fast_cutoff = mean_air_time - 4*sd_air_time, suspect_flag = air_time < supect_fast_cutoff) %>% select(dest, flight, hour, day, month, air_time, sd_air_time, mean_air_time, supect_fast_cutoff, suspect_flag, air_time, air_time) %>% filter(suspect_flag) ``` ``` ## # A tibble: 4 x 10 ## dest flight hour day month air_time sd_air_time mean_air_time ## <chr> <int> <dbl> <int> <int> <dbl> <dbl> <dbl> ## 1 BNA 3805 19 23 3 70 11.0 114. ## 2 GSP 4292 20 13 5 55 8.13 93.4 ## 3 ATL 1499 17 25 5 65 9.81 113. ## 4 MSP 4667 15 2 7 93 11.8 151. ## # ... with 2 more variables: supect_fast_cutoff <dbl>, suspect_flag <lgl> ``` **7\. Find all destinations that are flown by at least two carriers. Use that information to rank the carriers.** I found this quesiton ambiguous in terms of what it wants when it says “rank” the carriers using this. What I did was filter to just those destinations that have at least two carriers and then count the number of destinations with multiple carriers that each airline travels to. So it’s almost which airlines have more routes to ‘crowded’ destinations. ``` flights %>% group_by(dest) %>% mutate(n_carrier = n_distinct(carrier)) %>% filter(n_carrier > 1) %>% group_by(carrier) %>% summarise(n_dest = n_distinct(dest)) %>% mutate(rank = min_rank(-n_dest)) %>% arrange(rank) ``` ``` ## # A tibble: 16 x 3 ## carrier n_dest rank ## <chr> <int> <int> ## 1 EV 51 1 ## 2 9E 48 2 ## 3 UA 42 3 ## 4 DL 39 4 ## 5 B6 35 5 ## 6 AA 19 6 ## 7 MQ 19 6 ## 8 WN 10 8 ## 9 OO 5 9 ## 10 US 5 9 ## 11 VX 4 11 ## 12 YV 3 12 ## 13 FL 2 13 ## 14 AS 1 14 ## 15 F9 1 14 ## 16 HA 1 14 ``` Another way to approach this may have been to say to evaluate the delays between carriers going to the same destination and used that as a way of comparing and ‘ranking’ the best carriers. This would have been a more ambitious problem to answer. **8\. For each plane, count the number of flights before the first delay of greater than 1 hour.** ``` tail_nums_counts <- flights %>% arrange(tailnum, month, day, dep_time) %>% group_by(tailnum) %>% mutate(cum_sum = cumsum(arr_delay <= 60), nrow = row_number(), nrow_equal = nrow == cum_sum, cum_sum_before = cum_sum * nrow_equal) %>% mutate(total_before_hour = max(cum_sum_before, na.rm = TRUE)) %>% select(year, month, day, dep_time, tailnum, arr_delay, cum_sum, nrow, nrow_equal, cum_sum_before, total_before_hour) %>% ungroup() #let's change this to get rid of canceled flights, because those don't count as flights or delays. 
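# (note added for clarity, not the author's comment) cumsum(arr_delay <= 60) counts the
# flights so far with a delay of at most an hour; nrow_equal is TRUE only while every
# flight to date has met that condition, so max(cum_sum_before) is the length of that
# opening streak, i.e. the number of flights before the first delay of more than an hour.
# The second version below also drops cancelled flights first, since an NA arr_delay
# would make the cumulative sum NA.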
tail_nums_counts <- flights %>% filter(!is.na(arr_delay)) %>% select(tailnum, month, day, dep_time, arr_delay) %>% arrange(tailnum, month, day, dep_time) %>% group_by(tailnum) %>% mutate(cum_sum = cumsum(arr_delay <= 60), nrow = row_number(), nrow_equal = nrow == cum_sum, cum_sum_before = cum_sum * nrow_equal) %>% mutate(total_before_hour = max(cum_sum_before, na.rm = TRUE)) %>% select(month, day, dep_time, tailnum, arr_delay, cum_sum, nrow, nrow_equal, cum_sum_before, total_before_hour) %>% ungroup() tail_nums_counts %>% filter(!is.na(tailnum)) %>% arrange(desc(nrow), tailnum) %>% distinct(tailnum, .keep_all = TRUE) %>% select(tailnum, total_before_hour) %>% arrange(tailnum) ``` ``` ## # A tibble: 4,037 x 2 ## tailnum total_before_hour ## <chr> <dbl> ## 1 D942DN 0 ## 2 N0EGMQ 0 ## 3 N10156 9 ## 4 N102UW 25 ## 5 N103US 46 ## 6 N104UW 3 ## 7 N10575 0 ## 8 N105UW 22 ## 9 N107US 20 ## 10 N108UW 36 ## # ... with 4,027 more rows ``` Appendix -------- ### 5\.4\.1\.3\. You can also use `one_of()` for negating specific columns fields by name. ``` select(flights, -one_of(vars)) ``` ``` ## # A tibble: 336,776 x 15 ## year month day sched_dep_time sched_arr_time carrier flight tailnum ## <int> <int> <int> <int> <int> <chr> <int> <chr> ## 1 2013 1 1 515 819 UA 1545 N14228 ## 2 2013 1 1 529 830 UA 1714 N24211 ## 3 2013 1 1 540 850 AA 1141 N619AA ## 4 2013 1 1 545 1022 B6 725 N804JB ## 5 2013 1 1 600 837 DL 461 N668DN ## 6 2013 1 1 558 728 UA 1696 N39463 ## 7 2013 1 1 600 854 B6 507 N516JB ## 8 2013 1 1 600 723 EV 5708 N829AS ## 9 2013 1 1 600 846 B6 79 N593JB ## 10 2013 1 1 600 745 AA 301 N3ALAA ## # ... with 336,766 more rows, and 7 more variables: origin <chr>, ## # dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, ## # time_hour <dttm> ``` ### 5\.5\.2\.1\. Other, more sophisticated method[10](#fn10) ``` mutate_at(.tbl = flights, .vars = c("dep_time", "sched_dep_time"), .funs = funs(new = time_to_mins)) ``` ### 5\.5\.2\.2\. Let’s create this variable. I’ll name it `air_calc`. First method: ``` flights_new2 <- mutate(flights, # This air_time_clac step is necessary because you need to take into account red-eye flights in calculation air_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(air_time_calc) - time_to_mins(dep_time)) ``` The above method is the simple approach, though it doesn’t take into account the timezone of the arrivals locations. To handle this, I do a `left_join` on the `airports` dataframe and change `arr_time` to take into account the timezone and output the value in EST (as opposed to local time). We have not learned about ‘joins’ yet, so don’t worry if this loses you. ``` flights_new2 <- flights %>% left_join(select(nycflights13::airports, dest = faa, tz)) %>% mutate(arr_time_old = arr_time) %>% mutate(arr_time = arr_time - 100*(tz+5)) %>% mutate( # This arr_time_calc step is a helper variable I created to take into account the red-eye flights in calculation arr_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(arr_time_calc) - time_to_mins(dep_time)) %>% select(-arr_time_calc) ``` ``` ## Joining, by = "dest" ``` Curiouis if anyone explored the `air_time` variable and figured out the details of how exactly it was off if there was something systematic? I checked this briefly below, but did not go deep. **Closer look at `air_time`** Wanted to look at original `air_time` variable a little more. 
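(Side note added here: `air_calc` above depends on the `time_to_mins()` helper presumably defined earlier in the chapter, which converts `HHMM`\-style clock times such as `1530` into minutes after midnight. A minimal sketch of such a converter — an assumption about its behaviour, not the author’s exact code — would be:)

```
time_to_mins <- function(hhmm) 60 * (hhmm %/% 100) + (hhmm %% 100)
```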
Histogram below shows that most differences are now between 20 \- 40 minutes from the actual time. ``` flights_new2 %>% group_by(dest) %>% summarise(distance_med = median(distance, na.rm = TRUE), air_calc_med = median(air_calc, na.rm = TRUE), air_old_med = median(air_time, na.rm = TRUE), diff_new_old = air_calc_med - air_old_med, diff_hrs = as.factor(round(diff_new_old/60)), num = n()) %>% ggplot(aes(diff_new_old))+ geom_histogram() ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` ## Warning: Removed 5 rows containing non-finite values (stat_bin). ``` Regressing `diff` on `arr_delay` and `dep_delay` (remember `diff` is the difference between `air_time` and `air_calc`) ``` mod_air_time2 <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ dep_delay + arr_delay, data = .) summary(mod_air_time2) ``` ``` ## ## Call: ## lm(formula = diff ~ dep_delay + arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -93.168 -6.684 0.688 6.878 101.169 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -33.511843 0.024118 -1389.5 <2e-16 *** ## dep_delay 0.533376 0.001355 393.5 <2e-16 *** ## arr_delay -0.552852 0.001217 -454.2 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 12.43 on 319806 degrees of freedom ## Multiple R-squared: 0.3956, Adjusted R-squared: 0.3956 ## F-statistic: 1.047e+05 on 2 and 319806 DF, p-value: < 2.2e-16 ``` Doing such accounts for \~40% of the variation in the values. * note `dep_delay` and `arr_delay` variables are highly colinear – and the coefficients are opposite in the model. ``` flights_new2 %>% select(air_time, air_calc, arr_delay, dep_delay) %>% mutate(diff = air_time - air_calc) %>% select(-air_time, -air_calc) %>% na.omit() %>% cor() ``` ``` ## arr_delay dep_delay diff ## arr_delay 1.0000000 0.91531953 -0.32086698 ## dep_delay 0.9153195 1.00000000 -0.07582942 ## diff -0.3208670 -0.07582942 1.00000000 ``` Often this suggests you may not need to include both variables in the model as they will likely be providing the same information. Though here that is not the case as only including `arr_delay` associates with a steep decline in `R^2` to just account for \~10% of the variation. ``` mod_air_time <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ arr_delay, data = .) summary(mod_air_time) ``` ``` ## ## Call: ## lm(formula = diff ~ arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -182.960 -6.385 2.013 7.983 154.382 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.984e+01 2.710e-02 -1101.3 <2e-16 *** ## arr_delay -1.144e-01 5.972e-04 -191.6 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 15.14 on 319807 degrees of freedom ## Multiple R-squared: 0.103, Adjusted R-squared: 0.103 ## F-statistic: 3.67e+04 on 1 and 319807 DF, p-value: < 2.2e-16 ``` ### 5\.6\.7\.1\. Below is an extension on using the `quantile` method, but it is far beyond where we are right now. For the question *90th percentile for delays for flights by destination* we used `quantile` to output only the 90th percentile of values for each destination. Here, I want to address what if you had wanted to output the delays at multiple values, say, arbitrarily the 25th, 50th, 75th percentiles. 
One option would be to create a new variable for each value and in each quantile function sepcify 0\.25, 0\.50, 0\.75 respectively. ``` flights %>% group_by(dest) %>% summarise(delay.25 = quantile(arr_delay, 0.25, na.rm = TRUE), delay.50 = quantile(arr_delay, 0.50, na.rm = TRUE), delay.75 = quantile(arr_delay, 0.75, na.rm = TRUE)) ``` ``` ## # A tibble: 105 x 4 ## dest delay.25 delay.50 delay.75 ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... with 95 more rows ``` But there is a lot of replication here and the `quantile` function is also able to output more than one value by specifying the `probs` argument. ``` quantile(c(1:100), probs = c(0.25, .50, 0.75)) ``` ``` ## 25% 50% 75% ## 25.75 50.50 75.25 ``` So, in theory, rather than calling `quantile` multiple times, you could just call it once. However for any variable you create `summarise` is expecting only a single value output for each row, so just passing it in as\-is will cause it to fail. ``` flights %>% group_by(dest) %>% summarise(delays = quantile(arr_delay, probs = c(0.25, .50, 0.75), na.rm = TRUE)) ``` ``` ## Error: Column `delays` must be length 1 (a summary value), not 3 ``` To make this work you need to make the value a list, so that it will output a single list in each row of the column\[This style is covered at the end of the book in the section ‘list\-columns’ in iteration.]\[Also you need your dataframe to be in a tibble form rather than traditional dataframes for list\-cols to work]. I am going to create another list\-column field of the quantiles I specified. ``` prob_vals <- seq(from = 0.25, to = 0.75, by = 0.25) flights_quantiles <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, probs = prob_vals, na.rm = TRUE)), delays_q = list(c('25th', '50th', '75th'))) flights_quantiles ``` ``` ## # A tibble: 105 x 3 ## dest delays_val delays_q ## <chr> <list> <list> ## 1 ABQ <dbl [3]> <chr [3]> ## 2 ACK <dbl [3]> <chr [3]> ## 3 ALB <dbl [3]> <chr [3]> ## 4 ANC <dbl [3]> <chr [3]> ## 5 ATL <dbl [3]> <chr [3]> ## 6 AUS <dbl [3]> <chr [3]> ## 7 AVL <dbl [3]> <chr [3]> ## 8 BDL <dbl [3]> <chr [3]> ## 9 BGR <dbl [3]> <chr [3]> ## 10 BHM <dbl [3]> <chr [3]> ## # ... with 95 more rows ``` To convert these outputs out of the list\-col format, I can use the function `unnest`. ``` flights_quantiles %>% unnest() ``` ``` ## # A tibble: 315 x 3 ## dest delays_val delays_q ## <chr> <dbl> <chr> ## 1 ABQ -24 25th ## 2 ABQ -5.5 50th ## 3 ABQ 22.8 75th ## 4 ACK -13 25th ## 5 ACK -3 50th ## 6 ACK 10 75th ## 7 ALB -17 25th ## 8 ALB -4 50th ## 9 ALB 28 75th ## 10 ANC -10.8 25th ## # ... with 305 more rows ``` This will output the values as individual rows, repeating the `dest` value for the length of the list. If I want to spread the `delays_quantile` values into seperate columns I can use the `spread` function that is in the tidying R chapter. ``` flights_quantiles %>% unnest() %>% spread(key = delays_q, value = delays_val, sep = "_") ``` ``` ## # A tibble: 105 x 4 ## dest delays_q_25th delays_q_50th delays_q_75th ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... 
with 95 more rows ``` Let’s plot our unnested (but not unspread) data to see roughly the distribution of the delays for each destination at our quantiles of interest[11](#fn11). ``` flights_quantiles %>% unnest() %>% # mutate(delays_q = forcats::fct_reorder(f = delays_q, x = delays_val, fun = mean, na.rm = TRUE)) %>% ggplot(aes(x = delays_q, y = delays_val))+ geom_boxplot() ``` ``` ## Warning: Removed 3 rows containing non-finite values (stat_boxplot). ``` It can be a hassle naming the values explicitly. `quantile`’s default `probs` argument value is 0, 0\.25, 0\.5, 0\.75, 1\. Rather than needing to type the `delays_q` values `list(c('0%', '25%', '50%', '75%', '100%'))` you could have generated the values of these names dynamically using the `map` function in the `purrr` package (see chapter on iteration) with example for this by passing the `names` function over each value in `delays_val`. ``` flights_quantiles2 <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, na.rm = TRUE)), delays_q = list(c('0th', '25th', '50th', '75th', '100th'))) %>% mutate(delays_q2 = purrr::map(delays_val, names)) flights_quantiles2 ``` ``` ## # A tibble: 105 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <list> <list> <list> ## 1 ABQ <dbl [5]> <chr [5]> <chr [5]> ## 2 ACK <dbl [5]> <chr [5]> <chr [5]> ## 3 ALB <dbl [5]> <chr [5]> <chr [5]> ## 4 ANC <dbl [5]> <chr [5]> <chr [5]> ## 5 ATL <dbl [5]> <chr [5]> <chr [5]> ## 6 AUS <dbl [5]> <chr [5]> <chr [5]> ## 7 AVL <dbl [5]> <chr [5]> <chr [5]> ## 8 BDL <dbl [5]> <chr [5]> <chr [5]> ## 9 BGR <dbl [5]> <chr [5]> <chr [5]> ## 10 BHM <dbl [5]> <chr [5]> <chr [5]> ## # ... with 95 more rows ``` And then let’s `unnest` the data[12](#fn12). ``` flights_quantiles2 %>% unnest() ``` ``` ## # A tibble: 525 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <dbl> <chr> <chr> ## 1 ABQ -61 0th 0% ## 2 ABQ -24 25th 25% ## 3 ABQ -5.5 50th 50% ## 4 ABQ 22.8 75th 75% ## 5 ABQ 153 100th 100% ## 6 ACK -25 0th 0% ## 7 ACK -13 25th 25% ## 8 ACK -3 50th 50% ## 9 ACK 10 75th 75% ## 10 ACK 221 100th 100% ## # ... with 515 more rows ``` #### 5\.6\.7\.1\.4\. But let’s look at those flights that have the greatest differences in proportion on\-time vs. 2 hours late while still having values in both categories[13](#fn13). ``` flights %>% group_by(flight) %>% summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter_at(c("ontime", "late.120"), all_vars(. != 0 & . != 1)) %>% mutate(max_dist = abs(ontime - late.120)) %>% arrange(desc(max_dist)) ``` ``` ## # A tibble: 2,098 x 5 ## flight ontime late.120 n max_dist ## <int> <dbl> <dbl> <int> <dbl> ## 1 5288 0.927 0.0244 41 0.902 ## 2 2085 0.901 0.00658 152 0.895 ## 3 2174 0.914 0.0286 35 0.886 ## 4 2243 0.9 0.0167 120 0.883 ## 5 2180 0.889 0.0131 153 0.876 ## 6 2118 0.867 0.00699 143 0.860 ## 7 1167 0.864 0.00662 302 0.858 ## 8 3613 0.886 0.0286 35 0.857 ## 9 1772 0.891 0.0364 55 0.855 ## 10 1157 0.847 0.00667 150 0.84 ## # ... with 2,088 more rows ``` ### 5\.6\.7\.4\. To measure the difference in speed you can use the `microbenchmark` function ``` microbenchmark::microbenchmark(sub_optimal = filter(flights, is.na(dep_delay) | is.na(arr_delay)), optimal = filter(flights, is.na(arr_delay)), times = 10) ``` ``` ## Unit: milliseconds ## expr min lq mean median uq max neval cld ## sub_optimal 5.5279 6.2409 6.55796 6.74025 6.9686 7.2225 10 b ## optimal 3.9316 4.3135 4.55498 4.57885 4.8483 5.1514 10 a ``` ### 5\.6\.7\.5\. 
Explore the percentage delayed vs. percentage cancelled. ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), delayed = sum(arr_delay > 0, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, delayed_perc = delayed / num) %>% ggplot(aes(x = day))+ geom_line(aes(y = cancelled_perc), colour = "dark blue")+ geom_line(aes(y = delayed_perc), colour = "dark red") ``` Let’s try faceting by origin and looking at both values next to each other. ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% gather(key = type, value = value, avg_delayed, cancelled_perc) %>% ggplot(aes(x = day, y = value))+ geom_line()+ facet_grid(type ~ origin, scales = "free_y") ``` Look’s like the relationship across origins with the delay overlaid with color (not actually crazy about how this look). ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled_perc, colour = avg_delayed))+ geom_line()+ facet_grid(origin ~ .) ``` Let’s look at values as individual points and overlay a `geom_smooth` ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(avg_delayed, cancelled_perc, colour = origin))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` **Modeling approach:** We also could approach this using a model and regressing the average proportion of cancelled flights on average delay. ``` cancelled_mod1 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% lm(cancelled_perc ~ avg_delayed, data = .) summary(cancelled_mod1) ``` ``` ## ## Call: ## lm(formula = cancelled_perc ~ avg_delayed, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.026363 -0.009392 -0.002610 0.006196 0.048436 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.0152588 0.0020945 7.285 1.12e-10 *** ## avg_delayed 0.0018688 0.0002311 8.086 2.54e-12 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.01342 on 91 degrees of freedom ## Multiple R-squared: 0.4181, Adjusted R-squared: 0.4117 ## F-statistic: 65.39 on 1 and 91 DF, p-value: 2.537e-12 ``` ``` # ggplot(aes(x = day, y = cancelled_perc))+ # geom_line() ``` If you were confused by the `.` in `lm(cancelled_perc ~ avg_delayed, data = .)`, the dot specifies where the output from the prior steps should be piped into. The default is for it to go into the first argument, but for the `lm` function, data is not the first argument, so I have to explicitly tell it that the prior steps output should be inputted into the data argument of the `lm` function. See [On piping dots](05-data-transformations.html#on-piping-dots) for more details. The average delay accounts for 42% of the variation in the proportion of canceled flights. Modeling the log\-odds of the proportion of cancelled flights might be more successful as it produces a variable not constrained by 0 to 1, better aligning with the assumptions of linear regression. 
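Concretely (a gloss I am adding, not the author’s wording): the transformation used in the next block is the logit, log(p / (1 - p)), and its inverse — implemented a bit further down as `convert_logodds()` — is exp(x) / (1 + exp(x)).

```
# logit and inverse logit, for reference
logit <- function(p) log(p / (1 - p))
inv_logit <- function(x) exp(x) / (1 + exp(x))
```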
``` cancelled_mod2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% lm(cancelled_logodds ~ avg_delayed, data = .) ``` To convert logodds back to percentage, I built the following equation. ``` convert_logodds <- function(log_odds) exp(log_odds) / (1 + exp(log_odds)) ``` Let’s calculate the MAE or mean absolute error on our percentages. ``` cancelled_preds2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% ungroup() %>% modelr::spread_predictions(cancelled_mod1, cancelled_mod2) %>% mutate(cancelled_mod2 = convert_logodds(cancelled_mod2)) cancelled_preds2 %>% summarise(MAE1 = mean(abs(cancelled_perc - cancelled_mod1), na.rm = TRUE), MAE2 = mean(abs(cancelled_perc - cancelled_mod2), na.rm = TRUE), mean_value = mean(cancelled_perc, na.rm = TRUE)) ``` ``` ## # A tibble: 1 x 3 ## MAE1 MAE2 mean_value ## <dbl> <dbl> <dbl> ## 1 0.0101 0.00954 0.0279 ``` Let’s look at the differences in the outputs of the predictions from these models. ``` cancelled_preds2 %>% ggplot(aes(avg_delayed, cancelled_perc))+ geom_point()+ scale_size_continuous(range = c(1, 2))+ geom_line(aes(y = cancelled_mod1), colour = "blue", size = 1)+ geom_line(aes(y = cancelled_mod2), colour = "red", size = 1) ``` [14](#fn14) ### 5\.6\.7\.6\. As an example, let’s look at just Atl flights from LGA and compare DL, FL, MQ. ``` flights %>% filter(dest == 'ATL', origin == 'LGA') %>% count(carrier) ``` ``` ## # A tibble: 5 x 2 ## carrier n ## <chr> <int> ## 1 DL 5544 ## 2 EV 1 ## 3 FL 2337 ## 4 MQ 2322 ## 5 WN 59 ``` And compare the median delays between the three primary carriers DL, FL, MQ. ``` carriers_lga_atl <- flights %>% filter(dest == 'ATL', origin == 'LGA') %>% group_by(carrier) %>% # filter out small samples mutate(n_tot = n()) %>% filter(n_tot > 100) %>% select(-n_tot) %>% ### filter(!is.na(arr_delay)) %>% ungroup() label <- carriers_lga_atl %>% group_by(carrier) %>% summarise(arr_delay = median(arr_delay, na.rm = TRUE)) carriers_lga_atl %>% select(carrier, arr_delay) %>% ggplot()+ geom_boxplot(aes(carrier, arr_delay, colour = carrier), outlier.shape = NA)+ coord_cartesian(y = c(-60, 75))+ geom_text(mapping = aes(x = carrier, group = carrier, y = arr_delay + 5, label = arr_delay), data = label) ``` Or perhaps you want to use a statistical method to compare if the differences in the grouped are significant… ``` carriers_lga_atl %>% lm(arr_delay ~ carrier, data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = arr_delay ~ carrier, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -64.74 -22.33 -11.33 4.67 888.67 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 6.3273 0.6149 10.29 < 2e-16 *** ## carrierFL 14.4172 1.1340 12.71 < 2e-16 *** ## carrierMQ 7.7067 1.1417 6.75 1.56e-11 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 45.48 on 9979 degrees of freedom ## Multiple R-squared: 0.01692, Adjusted R-squared: 0.01672 ## F-statistic: 85.86 on 2 and 9979 DF, p-value: < 2.2e-16 ``` This shows the mean delay for DL is \~6\.3, FL is \~20\.7, MQ is \~14 and FL and MQ are significantly different from DL (and DL is significantly different from 0\)[15](#fn15). 
The carrier accouts for \~1\.6% of the variation in arrival… etc…. ### 5\.7\.1\.6\. Let’s look at the fastest 20 `air_time`s for each destination. ``` flights_new2 %>% group_by(dest) %>% mutate(min_rank = min_rank(air_time)) %>% filter(min_rank < 20) %>% ggplot(aes(distance, air_time, colour = dest))+ geom_point()+ guides(colour = FALSE) ``` Let’s do the same for my custom `air_time` calculation `air_calc`. ``` flights_new2 %>% group_by(dest) %>% mutate(min_rank = min_rank(air_calc)) %>% filter(min_rank < 20) %>% ggplot(aes(distance, air_calc, colour = dest))+ geom_point()+ guides(colour = FALSE) ``` *Rather than the fastest 20, let’s look at the mean `dist` and `air_time` for each[16](#fn16).* First using the `air_time` value. ``` flights_new2 %>% mutate_at(.vars = c("dep_time", "arr_time"), .funs = funs(time_to_mins)) %>% group_by(dest) %>% summarise(mean_air = mean(air_time, na.rm = TRUE), mean_dist = mean(distance, na.rm = TRUE)) %>% ggplot(., aes(x = mean_dist, y = mean_air))+ geom_point(aes(colour = dest))+ scale_y_continuous(breaks = seq(0, 660, 60))+ guides(colour = FALSE) ``` ``` ## Warning: funs() is soft deprecated as of dplyr 0.8.0 ## please use list() instead ## ## # Before: ## funs(name = f(.) ## ## # After: ## list(name = ~f(.)) ## This warning is displayed once per session. ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` Then with the custom `air_calc`. ``` flights_new2 %>% mutate_at(.vars = c("dep_time", "arr_time"), .funs = funs(time_to_mins)) %>% group_by(dest) %>% summarise(mean_air = mean(air_calc, na.rm = TRUE), mean_dist = mean(distance, na.rm = TRUE)) %>% ggplot(., aes(x = mean_dist, y = mean_air))+ geom_point(aes(colour = dest))+ scale_y_continuous(breaks = seq(0, 660, 60))+ guides(colour = FALSE) ``` ``` ## Warning: Removed 5 rows containing missing values (geom_point). ``` ### 5\.7\.1\.5 Let’s run this for every 3 lags (1, 4, 7, …) and plot. ``` lags_cors <- tibble(lag = seq(1,200, 3)) %>% mutate(cor = purrr::map_dbl(lag, cor_by_lag)) lags_cors %>% ggplot(aes(x = lag, cor))+ geom_line()+ coord_cartesian(ylim = c(0, 0.40)) ``` ### 5\.7\.1\.8\. ``` tail_nums_counts %>% nest() %>% sample_n(10) %>% unnest() %>% View() ``` ### On piping dots The `.` let’s you explicitly state where to pipe the output from the prior steps. The default is to have it go into the first argument of the function. *Let’s look at an example:* ``` flights %>% filter(!is.na(arr_delay)) %>% count(origin) ``` ``` ## # A tibble: 3 x 2 ## origin n ## <chr> <int> ## 1 EWR 117127 ## 2 JFK 109079 ## 3 LGA 101140 ``` This is the exact same thing as the code below, I just added the dots to be explicit about where in the function the output from the prior steps will go: ``` flights %>% filter(., !is.na(arr_delay)) %>% count(., origin) ``` ``` ## # A tibble: 3 x 2 ## origin n ## <chr> <int> ## 1 EWR 117127 ## 2 JFK 109079 ## 3 LGA 101140 ``` Functions in dplyr, etc. expect dataframes in the first argument, so the default piping behavior works fine you don’t end\-up using the dot in this way. However functions outside of the tidyverse are not always so consistent and may expect the dataframe (or w/e your output from the prior step is) in a different location of the function, hence the need to use the dot to specify where it should go. The example below uses base R’s `lm` (linear models) function to regress `arr_delay` on `dep_delay` and `distance`[17](#fn17). The first argument expects a function, the second argument the data, hence the need for the dot. 
``` flights %>% filter(., !is.na(arr_delay)) %>% lm(arr_delay ~ dep_delay + distance, .) ``` ``` ## ## Call: ## lm(formula = arr_delay ~ dep_delay + distance, data = .) ## ## Coefficients: ## (Intercept) dep_delay distance ## -3.212779 1.018077 -0.002551 ``` When using the `.` in piping, I will usually make the argment name I am piping into explicit. This makes it more clear and also means if I have the position order wrong it doesn’t matter. ``` flights %>% filter(., !is.na(arr_delay)) %>% lm(arr_delay ~ dep_delay + distance, data = .) ``` You can also use the `.` in conjunction with R’s subsetting to output vectors. In the example below I filter flights, then extract the `arr_delay` column as a vector and pipe it into the base R function `quantile`. ``` flights %>% filter(!is.na(arr_delay)) %>% .$arr_delay %>% quantile(probs = seq(from = 0, to = 1, by = 0.10)) ``` ``` ## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% ## -86 -26 -19 -14 -10 -5 1 9 21 52 1272 ``` `quantile` is expecting a numeric vector in it’s first argument so the above works. If instead of `.$arr_delay`, you’d tried `select(arr_delay`) the function would have failed because the `select` statement outputs a dataframe rather than a vector (and `quantile` would have become very angry with you). One weakness with the above method is it only allows you to input a single vector into the base R function (while many funcitons can take in multiple vectors). A better way of doing this is to use the `with` function. The `with` function allows you to pipe a dataframe into the first argument and then reference the column in that dataframe with just the field names. This makes using those base R funcitons easier and more similar in syntax to tidyverse functions. For example, the above example would look become. ``` flights %>% filter(!is.na(arr_delay)) %>% with(quantile(arr_delay, probs = seq(from = 0, to = 1, by = 0.10))) ``` ``` ## 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% ## -86 -26 -19 -14 -10 -5 1 9 21 52 1272 ``` This method also makes it easy to input multiple field names in this style. Let’s look at this with the `table` function[18](#fn18) ``` flights %>% filter(!is.na(arr_delay)) %>% with(table(origin, carrier)) ``` ``` ## carrier ## origin 9E AA AS B6 DL EV F9 FL HA MQ OO ## EWR 1193 3363 709 6472 4295 41557 0 0 0 2097 6 ## JFK 13742 13600 0 41666 20559 1326 0 0 342 6838 0 ## LGA 2359 14984 0 5911 22804 8225 681 3175 0 16102 23 ## carrier ## origin UA US VX WN YV ## EWR 45501 4326 1552 6056 0 ## JFK 4478 2964 3564 0 0 ## LGA 7803 12541 0 5988 544 ``` ### plotly The `plotly` package has a cool function `ggplotly` that allows you to add wrappers `ggplot` that turn it into html that allow you to do things like zoom\-in and hover over points. It also has a `frame` argument that allows you to make animations or filter between points. Here is an example from the `flights` dataset. ``` p <- flights %>% group_by(hour, month) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay, group = month, frame = month))+ geom_point()+ geom_smooth() plotly::ggplotly(p) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` This is the base from which this is built. 
``` flights %>% group_by(hour, month) %>% summarise(avg_delay = mean(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = hour, y = avg_delay, group = month))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 1 rows containing missing values (geom_point). ``` ### 5\.4\.1\.3\. You can also use `one_of()` for excluding specific columns by name. ``` select(flights, -one_of(vars)) ``` ``` ## # A tibble: 336,776 x 15 ## year month day sched_dep_time sched_arr_time carrier flight tailnum ## <int> <int> <int> <int> <int> <chr> <int> <chr> ## 1 2013 1 1 515 819 UA 1545 N14228 ## 2 2013 1 1 529 830 UA 1714 N24211 ## 3 2013 1 1 540 850 AA 1141 N619AA ## 4 2013 1 1 545 1022 B6 725 N804JB ## 5 2013 1 1 600 837 DL 461 N668DN ## 6 2013 1 1 558 728 UA 1696 N39463 ## 7 2013 1 1 600 854 B6 507 N516JB ## 8 2013 1 1 600 723 EV 5708 N829AS ## 9 2013 1 1 600 846 B6 79 N593JB ## 10 2013 1 1 600 745 AA 301 N3ALAA ## # ... with 336,766 more rows, and 7 more variables: origin <chr>, ## # dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, ## # time_hour <dttm> ``` ### 5\.5\.2\.1\. Another, more sophisticated method[10](#fn10): ``` mutate_at(.tbl = flights, .vars = c("dep_time", "sched_dep_time"), .funs = funs(new = time_to_mins)) ``` ### 5\.5\.2\.2\. Let’s create this variable. I’ll name it `air_calc`. First method: ``` flights_new2 <- mutate(flights, # This air_time_calc step is necessary because you need to take into account red-eye flights in the calculation air_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(air_time_calc) - time_to_mins(dep_time)) ``` The above method is the simple approach, though it doesn’t take into account the timezone of the arrival locations. To handle this, I do a `left_join` on the `airports` dataframe and change `arr_time` to take into account the timezone and output the value in EST (as opposed to local time). We have not learned about ‘joins’ yet, so don’t worry if this loses you. ``` flights_new2 <- flights %>% left_join(select(nycflights13::airports, dest = faa, tz)) %>% mutate(arr_time_old = arr_time) %>% mutate(arr_time = arr_time - 100*(tz+5)) %>% mutate( # This arr_time_calc step is a helper variable I created to take into account the red-eye flights in the calculation arr_time_calc = ifelse(dep_time > arr_time, arr_time + 2400, arr_time), air_calc = time_to_mins(arr_time_calc) - time_to_mins(dep_time)) %>% select(-arr_time_calc) ``` ``` ## Joining, by = "dest" ``` Curious whether anyone explored the `air_time` variable and figured out the details of how exactly it was off, or whether there was something systematic. I checked this briefly below, but did not go deep. **Closer look at `air_time`** I wanted to look at the original `air_time` variable a little more. The histogram below shows that most differences are between 20 and 40 minutes. ``` flights_new2 %>% group_by(dest) %>% summarise(distance_med = median(distance, na.rm = TRUE), air_calc_med = median(air_calc, na.rm = TRUE), air_old_med = median(air_time, na.rm = TRUE), diff_new_old = air_calc_med - air_old_med, diff_hrs = as.factor(round(diff_new_old/60)), num = n()) %>% ggplot(aes(diff_new_old))+ geom_histogram() ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` ## Warning: Removed 5 rows containing non-finite values (stat_bin). 
``` Regressing `diff` on `arr_delay` and `dep_delay` (remember `diff` is the difference between `air_time` and `air_calc`) ``` mod_air_time2 <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ dep_delay + arr_delay, data = .) summary(mod_air_time2) ``` ``` ## ## Call: ## lm(formula = diff ~ dep_delay + arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -93.168 -6.684 0.688 6.878 101.169 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -33.511843 0.024118 -1389.5 <2e-16 *** ## dep_delay 0.533376 0.001355 393.5 <2e-16 *** ## arr_delay -0.552852 0.001217 -454.2 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 12.43 on 319806 degrees of freedom ## Multiple R-squared: 0.3956, Adjusted R-squared: 0.3956 ## F-statistic: 1.047e+05 on 2 and 319806 DF, p-value: < 2.2e-16 ``` Doing such accounts for \~40% of the variation in the values. * note `dep_delay` and `arr_delay` variables are highly colinear – and the coefficients are opposite in the model. ``` flights_new2 %>% select(air_time, air_calc, arr_delay, dep_delay) %>% mutate(diff = air_time - air_calc) %>% select(-air_time, -air_calc) %>% na.omit() %>% cor() ``` ``` ## arr_delay dep_delay diff ## arr_delay 1.0000000 0.91531953 -0.32086698 ## dep_delay 0.9153195 1.00000000 -0.07582942 ## diff -0.3208670 -0.07582942 1.00000000 ``` Often this suggests you may not need to include both variables in the model as they will likely be providing the same information. Though here that is not the case as only including `arr_delay` associates with a steep decline in `R^2` to just account for \~10% of the variation. ``` mod_air_time <- mutate(flights_new2, diff = (air_time - air_calc)) %>% select(-air_time, -air_calc, -flight, -tailnum, -dest) %>% na.omit() %>% lm(diff ~ arr_delay, data = .) summary(mod_air_time) ``` ``` ## ## Call: ## lm(formula = diff ~ arr_delay, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -182.960 -6.385 2.013 7.983 154.382 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -2.984e+01 2.710e-02 -1101.3 <2e-16 *** ## arr_delay -1.144e-01 5.972e-04 -191.6 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 15.14 on 319807 degrees of freedom ## Multiple R-squared: 0.103, Adjusted R-squared: 0.103 ## F-statistic: 3.67e+04 on 1 and 319807 DF, p-value: < 2.2e-16 ``` ### 5\.6\.7\.1\. Below is an extension on using the `quantile` method, but it is far beyond where we are right now. For the question *90th percentile for delays for flights by destination* we used `quantile` to output only the 90th percentile of values for each destination. Here, I want to address what if you had wanted to output the delays at multiple values, say, arbitrarily the 25th, 50th, 75th percentiles. One option would be to create a new variable for each value and in each quantile function sepcify 0\.25, 0\.50, 0\.75 respectively. 
``` flights %>% group_by(dest) %>% summarise(delay.25 = quantile(arr_delay, 0.25, na.rm = TRUE), delay.50 = quantile(arr_delay, 0.50, na.rm = TRUE), delay.75 = quantile(arr_delay, 0.75, na.rm = TRUE)) ``` ``` ## # A tibble: 105 x 4 ## dest delay.25 delay.50 delay.75 ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... with 95 more rows ``` But there is a lot of replication here and the `quantile` function is also able to output more than one value by specifying the `probs` argument. ``` quantile(c(1:100), probs = c(0.25, .50, 0.75)) ``` ``` ## 25% 50% 75% ## 25.75 50.50 75.25 ``` So, in theory, rather than calling `quantile` multiple times, you could just call it once. However for any variable you create `summarise` is expecting only a single value output for each row, so just passing it in as\-is will cause it to fail. ``` flights %>% group_by(dest) %>% summarise(delays = quantile(arr_delay, probs = c(0.25, .50, 0.75), na.rm = TRUE)) ``` ``` ## Error: Column `delays` must be length 1 (a summary value), not 3 ``` To make this work you need to make the value a list, so that it will output a single list in each row of the column\[This style is covered at the end of the book in the section ‘list\-columns’ in iteration.]\[Also you need your dataframe to be in a tibble form rather than traditional dataframes for list\-cols to work]. I am going to create another list\-column field of the quantiles I specified. ``` prob_vals <- seq(from = 0.25, to = 0.75, by = 0.25) flights_quantiles <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, probs = prob_vals, na.rm = TRUE)), delays_q = list(c('25th', '50th', '75th'))) flights_quantiles ``` ``` ## # A tibble: 105 x 3 ## dest delays_val delays_q ## <chr> <list> <list> ## 1 ABQ <dbl [3]> <chr [3]> ## 2 ACK <dbl [3]> <chr [3]> ## 3 ALB <dbl [3]> <chr [3]> ## 4 ANC <dbl [3]> <chr [3]> ## 5 ATL <dbl [3]> <chr [3]> ## 6 AUS <dbl [3]> <chr [3]> ## 7 AVL <dbl [3]> <chr [3]> ## 8 BDL <dbl [3]> <chr [3]> ## 9 BGR <dbl [3]> <chr [3]> ## 10 BHM <dbl [3]> <chr [3]> ## # ... with 95 more rows ``` To convert these outputs out of the list\-col format, I can use the function `unnest`. ``` flights_quantiles %>% unnest() ``` ``` ## # A tibble: 315 x 3 ## dest delays_val delays_q ## <chr> <dbl> <chr> ## 1 ABQ -24 25th ## 2 ABQ -5.5 50th ## 3 ABQ 22.8 75th ## 4 ACK -13 25th ## 5 ACK -3 50th ## 6 ACK 10 75th ## 7 ALB -17 25th ## 8 ALB -4 50th ## 9 ALB 28 75th ## 10 ANC -10.8 25th ## # ... with 305 more rows ``` This will output the values as individual rows, repeating the `dest` value for the length of the list. If I want to spread the `delays_quantile` values into seperate columns I can use the `spread` function that is in the tidying R chapter. ``` flights_quantiles %>% unnest() %>% spread(key = delays_q, value = delays_val, sep = "_") ``` ``` ## # A tibble: 105 x 4 ## dest delays_q_25th delays_q_50th delays_q_75th ## <chr> <dbl> <dbl> <dbl> ## 1 ABQ -24 -5.5 22.8 ## 2 ACK -13 -3 10 ## 3 ALB -17 -4 28 ## 4 ANC -10.8 1.5 10 ## 5 ATL -12 -1 16 ## 6 AUS -19 -5 15 ## 7 AVL -11 -1 13 ## 8 BDL -18 -10 14 ## 9 BGR -21.8 -9 19.8 ## 10 BHM -20 -2 34 ## # ... with 95 more rows ``` Let’s plot our unnested (but not unspread) data to see roughly the distribution of the delays for each destination at our quantiles of interest[11](#fn11). 
``` flights_quantiles %>% unnest() %>% # mutate(delays_q = forcats::fct_reorder(f = delays_q, x = delays_val, fun = mean, na.rm = TRUE)) %>% ggplot(aes(x = delays_q, y = delays_val))+ geom_boxplot() ``` ``` ## Warning: Removed 3 rows containing non-finite values (stat_boxplot). ``` It can be a hassle naming the values explicitly. `quantile`’s default `probs` argument value is 0, 0\.25, 0\.5, 0\.75, 1\. Rather than needing to type the `delays_q` values `list(c('0%', '25%', '50%', '75%', '100%'))`, you could have generated these names dynamically using the `map` function in the `purrr` package (see the chapter on iteration), for example by mapping the `names` function over each value in `delays_val`. ``` flights_quantiles2 <- flights %>% group_by(dest) %>% summarise(delays_val = list(quantile(arr_delay, na.rm = TRUE)), delays_q = list(c('0th', '25th', '50th', '75th', '100th'))) %>% mutate(delays_q2 = purrr::map(delays_val, names)) flights_quantiles2 ``` ``` ## # A tibble: 105 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <list> <list> <list> ## 1 ABQ <dbl [5]> <chr [5]> <chr [5]> ## 2 ACK <dbl [5]> <chr [5]> <chr [5]> ## 3 ALB <dbl [5]> <chr [5]> <chr [5]> ## 4 ANC <dbl [5]> <chr [5]> <chr [5]> ## 5 ATL <dbl [5]> <chr [5]> <chr [5]> ## 6 AUS <dbl [5]> <chr [5]> <chr [5]> ## 7 AVL <dbl [5]> <chr [5]> <chr [5]> ## 8 BDL <dbl [5]> <chr [5]> <chr [5]> ## 9 BGR <dbl [5]> <chr [5]> <chr [5]> ## 10 BHM <dbl [5]> <chr [5]> <chr [5]> ## # ... with 95 more rows ``` And then let’s `unnest` the data[12](#fn12). ``` flights_quantiles2 %>% unnest() ``` ``` ## # A tibble: 525 x 4 ## dest delays_val delays_q delays_q2 ## <chr> <dbl> <chr> <chr> ## 1 ABQ -61 0th 0% ## 2 ABQ -24 25th 25% ## 3 ABQ -5.5 50th 50% ## 4 ABQ 22.8 75th 75% ## 5 ABQ 153 100th 100% ## 6 ACK -25 0th 0% ## 7 ACK -13 25th 25% ## 8 ACK -3 50th 50% ## 9 ACK 10 75th 75% ## 10 ACK 221 100th 100% ## # ... with 515 more rows ``` #### 5\.6\.7\.1\.4\. But let’s look at those flights that have the greatest differences in proportion on\-time vs. 2 hours late while still having values in both categories[13](#fn13). ``` flights %>% group_by(flight) %>% summarise(ontime = sum(arr_delay <= 0, na.rm = TRUE)/n(), late.120 = sum(arr_delay >= 120, na.rm = TRUE)/n(), n = n()) %>% ungroup() %>% filter_at(c("ontime", "late.120"), all_vars(. != 0 & . != 1)) %>% mutate(max_dist = abs(ontime - late.120)) %>% arrange(desc(max_dist)) ``` ``` ## # A tibble: 2,098 x 5 ## flight ontime late.120 n max_dist ## <int> <dbl> <dbl> <int> <dbl> ## 1 5288 0.927 0.0244 41 0.902 ## 2 2085 0.901 0.00658 152 0.895 ## 3 2174 0.914 0.0286 35 0.886 ## 4 2243 0.9 0.0167 120 0.883 ## 5 2180 0.889 0.0131 153 0.876 ## 6 2118 0.867 0.00699 143 0.860 ## 7 1167 0.864 0.00662 302 0.858 ## 8 3613 0.886 0.0286 35 0.857 ## 9 1772 0.891 0.0364 55 0.855 ## 10 1157 0.847 0.00667 150 0.84 ## # ... with 2,088 more rows ```
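To make the `filter_at()` step a little more transparent, here is a minimal equivalent with the conditions written out one by one. The summarised dataframe is never assigned a name above, so `flight_props` below is a hypothetical name standing in for it.

```
# Hypothetical: `flight_props` is the result of the group_by()/summarise() step
# above, with per-flight `ontime`, `late.120`, and `n` columns.
flight_props %>%
  filter(ontime != 0, ontime != 1,          # the same conditions that
         late.120 != 0, late.120 != 1) %>%  # all_vars(. != 0 & . != 1) applies
  mutate(max_dist = abs(ontime - late.120)) %>%
  arrange(desc(max_dist))
```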
### 5\.6\.7\.4\. To measure the difference in speed you can use the `microbenchmark` function: ``` microbenchmark::microbenchmark(sub_optimal = filter(flights, is.na(dep_delay) | is.na(arr_delay)), optimal = filter(flights, is.na(arr_delay)), times = 10) ``` ``` ## Unit: milliseconds ## expr min lq mean median uq max neval cld ## sub_optimal 5.5279 6.2409 6.55796 6.74025 6.9686 7.2225 10 b ## optimal 3.9316 4.3135 4.55498 4.57885 4.8483 5.1514 10 a ``` ### 5\.6\.7\.5\. Explore the percentage delayed vs. percentage cancelled. ``` flights %>% group_by(day) %>% summarise(cancelled = sum(is.na(arr_delay)), delayed = sum(arr_delay > 0, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, delayed_perc = delayed / num) %>% ggplot(aes(x = day))+ geom_line(aes(y = cancelled_perc), colour = "dark blue")+ geom_line(aes(y = delayed_perc), colour = "dark red") ``` Let’s try faceting by origin and looking at both values next to each other. ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% gather(key = type, value = value, avg_delayed, cancelled_perc) %>% ggplot(aes(x = day, y = value))+ geom_line()+ facet_grid(type ~ origin, scales = "free_y") ``` Looks like the relationship across origins with the delay overlaid with color (not actually crazy about how this looks). ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(x = day, y = cancelled_perc, colour = avg_delayed))+ geom_line()+ facet_grid(origin ~ .) ``` Let’s look at values as individual points and overlay a `geom_smooth`. ``` flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% ggplot(aes(avg_delayed, cancelled_perc, colour = origin))+ geom_point()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` **Modeling approach:** We also could approach this using a model and regressing the average proportion of cancelled flights on average delay. ``` cancelled_mod1 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num) %>% lm(cancelled_perc ~ avg_delayed, data = .) summary(cancelled_mod1) ``` ``` ## ## Call: ## lm(formula = cancelled_perc ~ avg_delayed, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.026363 -0.009392 -0.002610 0.006196 0.048436 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.0152588 0.0020945 7.285 1.12e-10 *** ## avg_delayed 0.0018688 0.0002311 8.086 2.54e-12 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 0.01342 on 91 degrees of freedom ## Multiple R-squared: 0.4181, Adjusted R-squared: 0.4117 ## F-statistic: 65.39 on 1 and 91 DF, p-value: 2.537e-12 ``` ``` # ggplot(aes(x = day, y = cancelled_perc))+ # geom_line() ``` If you were confused by the `.` in `lm(cancelled_perc ~ avg_delayed, data = .)`, the dot specifies where the output from the prior steps should be piped into. The default is for it to go into the first argument, but for the `lm` function, data is not the first argument, so I have to explicitly tell it that the prior steps output should be inputted into the data argument of the `lm` function. See [On piping dots](05-data-transformations.html#on-piping-dots) for more details. The average delay accounts for 42% of the variation in the proportion of canceled flights. Modeling the log\-odds of the proportion of cancelled flights might be more successful as it produces a variable not constrained by 0 to 1, better aligning with the assumptions of linear regression. ``` cancelled_mod2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% lm(cancelled_logodds ~ avg_delayed, data = .) ``` To convert logodds back to percentage, I built the following equation. ``` convert_logodds <- function(log_odds) exp(log_odds) / (1 + exp(log_odds)) ``` Let’s calculate the MAE or mean absolute error on our percentages. ``` cancelled_preds2 <- flights %>% group_by(origin, day) %>% summarise(cancelled = sum(is.na(arr_delay)), avg_delayed = mean(arr_delay, na.rm = TRUE), num = n(), cancelled_perc = cancelled / num, cancelled_logodds = log(cancelled / (num - cancelled))) %>% ungroup() %>% modelr::spread_predictions(cancelled_mod1, cancelled_mod2) %>% mutate(cancelled_mod2 = convert_logodds(cancelled_mod2)) cancelled_preds2 %>% summarise(MAE1 = mean(abs(cancelled_perc - cancelled_mod1), na.rm = TRUE), MAE2 = mean(abs(cancelled_perc - cancelled_mod2), na.rm = TRUE), mean_value = mean(cancelled_perc, na.rm = TRUE)) ``` ``` ## # A tibble: 1 x 3 ## MAE1 MAE2 mean_value ## <dbl> <dbl> <dbl> ## 1 0.0101 0.00954 0.0279 ``` Let’s look at the differences in the outputs of the predictions from these models. ``` cancelled_preds2 %>% ggplot(aes(avg_delayed, cancelled_perc))+ geom_point()+ scale_size_continuous(range = c(1, 2))+ geom_line(aes(y = cancelled_mod1), colour = "blue", size = 1)+ geom_line(aes(y = cancelled_mod2), colour = "red", size = 1) ``` [14](#fn14) ### 5\.6\.7\.6\. As an example, let’s look at just Atl flights from LGA and compare DL, FL, MQ. ``` flights %>% filter(dest == 'ATL', origin == 'LGA') %>% count(carrier) ``` ``` ## # A tibble: 5 x 2 ## carrier n ## <chr> <int> ## 1 DL 5544 ## 2 EV 1 ## 3 FL 2337 ## 4 MQ 2322 ## 5 WN 59 ``` And compare the median delays between the three primary carriers DL, FL, MQ. 
``` carriers_lga_atl <- flights %>% filter(dest == 'ATL', origin == 'LGA') %>% group_by(carrier) %>% # filter out small samples mutate(n_tot = n()) %>% filter(n_tot > 100) %>% select(-n_tot) %>% ### filter(!is.na(arr_delay)) %>% ungroup() label <- carriers_lga_atl %>% group_by(carrier) %>% summarise(arr_delay = median(arr_delay, na.rm = TRUE)) carriers_lga_atl %>% select(carrier, arr_delay) %>% ggplot()+ geom_boxplot(aes(carrier, arr_delay, colour = carrier), outlier.shape = NA)+ coord_cartesian(y = c(-60, 75))+ geom_text(mapping = aes(x = carrier, group = carrier, y = arr_delay + 5, label = arr_delay), data = label) ``` Or perhaps you want to use a statistical method to compare whether the differences in the groups are significant… ``` carriers_lga_atl %>% lm(arr_delay ~ carrier, data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = arr_delay ~ carrier, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -64.74 -22.33 -11.33 4.67 888.67 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 6.3273 0.6149 10.29 < 2e-16 *** ## carrierFL 14.4172 1.1340 12.71 < 2e-16 *** ## carrierMQ 7.7067 1.1417 6.75 1.56e-11 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 45.48 on 9979 degrees of freedom ## Multiple R-squared: 0.01692, Adjusted R-squared: 0.01672 ## F-statistic: 85.86 on 2 and 9979 DF, p-value: < 2.2e-16 ``` This shows the mean delay for DL is \~6\.3, FL is \~20\.7, MQ is \~14 and FL and MQ are significantly different from DL (and DL is significantly different from 0\)[15](#fn15). The carrier accounts for \~1\.6% of the variation in arrival… etc….
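As a minimal sketch of where that \~1\.6% figure comes from (the model above is piped straight into `summary()` and never stored, so `mod_carrier` below is a hypothetical name for it), the R\-squared can be pulled out of the summary object directly rather than read off the printed output:

```
# Hypothetical: refit and store the same carrier model, then extract R-squared
mod_carrier <- lm(arr_delay ~ carrier, data = carriers_lga_atl)
summary(mod_carrier)$r.squared
# Multiple R-squared printed above: 0.01692, i.e. carrier explains only
# about 1.6-1.7% of the variation in arrival delay for these flights.
```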
Ch. 7: Data exploration ======================= **Key questions:** * 7\.3\.4\. \#1 * 7\.5\.1\.1 \#2 * 7\.5\.3\.1\. \#2, 4 **Functions and notes:** * `cut_width`: specify binsize of each cut (often used with `geom_boxplot`) * `cut_number`: specify number of groups to make, allowing for variable binsize (often used with `geom_boxplot`) * `geom_histogram`: key args are `bins`, `binwidth` * `geom_freqpoly`: for when you want overlapping histograms (so it outputs lines instead) + can set y as `..density..` to equalize the scale of each (similar to how `geom_density` does). * `geom_boxplot`: adjust outliers with `outlier.colour`, `outlier.fill`, … * `geom_violin`: creates double sided histograms for each factor of x * `geom_bin2d`: scatter plot of x and y values, but uses shading to show the count/density at each point * `geom_hex`: same as `geom_bin2d` but hexagon instead of square shapes are shaded in * `reorder`: arg1 \= variable to reorder, arg2 \= variable to reorder it by, arg3 \= function to reorder by (e.g. median, mean, max…) * `coord_cartesian`: adjust x,y window w/o filtering out values that are excluded from view * `xlim`; `ylim`: adjust window and filter out values not within window (same method as `scale_x(/y)_continuous`) + the difference between these and `coord_cartesian` is important for geoms like `geom_smooth` that aggregate as they visualize * `ifelse`: vectorized if else (not to be confused with the `if` and `else` functions) + `dplyr::if_else` is a more strict alternative * `case_when`: create a new variable that relies on a complex combination of existing variables + often used when you have complex or multiple `ifelse` statements accruing 7\.3: Variation --------------- ### 7\.3\.4\. *1\. Explore the distribution of each of the x, y, and z variables in diamonds. What do you learn? Think about a diamond and how you might decide which dimension is the length, width, and depth.* x has some 0s, which signify a data collection error; y and z have extreme outliers (z more so). ``` x_hist <- ggplot(diamonds)+ geom_histogram(aes(x = x), binwidth = 0.1)+ coord_cartesian(xlim = c(0, 10)) y_hist <- ggplot(diamonds)+ geom_histogram(aes(x = y), binwidth = 0.1)+ coord_cartesian(xlim = c(0, 10)) z_hist <- ggplot(diamonds)+ geom_histogram(aes(x = z), binwidth = 0.1)+ coord_cartesian(xlim = c(0, 10)) gridExtra::grid.arrange(x_hist, y_hist, z_hist, ncol = 1) ``` * All three have peaks and troughs on even points. X and y have more similar distributions than z. I would say that x and y are likely length and width and z is depth, because diamonds are typically circular on the face, so length and width will have the same ratio; we see this is the case for the x and y dimensions, whereas z tends to be more shallow. ``` diamonds %>% sample_n(1000) %>% ggplot()+ geom_point(aes(x, y))+ coord_fixed() diamonds %>% sample_n(1000) %>% ggplot()+ geom_point(aes(x, z))+ coord_fixed() ``` *2\. Explore the distribution of price. Do you discover anything unusual or surprising? (Hint: Carefully think about the binwidth and make sure you try a wide range of values.)* ``` ggplot(diamonds)+ geom_histogram(aes(x = price), binwidth=10) ``` Price is right skewed. Also notice that from \~1450 to \~1550 there are no diamonds. ``` ggplot(diamonds)+ geom_histogram(aes(x = price), binwidth = 5)+coord_cartesian(xlim = c(1400,1600)) ``` *3\. How many diamonds are 0\.99 carat? How many are 1 carat? 
What do you think is the cause of the difference?* ``` filter(diamonds, carat == 0.99) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 23 ``` ``` filter(diamonds, carat == 1) %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 1558 ``` For visual scale. ``` ggplot(diamonds)+ geom_histogram(aes(x=carat), binwidth=.01)+ coord_cartesian(xlim=c(.99,1)) ``` The difference may be caused by jewelers rounding up because people want to buy ‘1’ carat diamonds, not 0\.99 carat diamonds. It could also be that some listings are simply only in integers[19](#fn19). *4\. Compare and contrast `coord_cartesian()` vs `xlim()` or `ylim()` when zooming in on a histogram. What happens if you leave binwidth unset? What happens if you try and zoom so only half a bar shows?* `coord_cartesian` does not change the data, just the window view, whereas `xlim` and `ylim` will get rid of data outside of the domain[20](#fn20). 7\.4: Missing values -------------------- ### 7\.4\.1\. *1\. What happens to missing values in a histogram? What happens to missing values in a bar chart? Why is there a difference?* With numeric data they both filter out NAs, though for categorical / character variables the `barplot` will create a separate column for the category. This is because `NA` can just be thought of as another category, though it is difficult to place it within a distribution of values. The first two charts below treat these the same. ``` mutate(diamonds, carattest=ifelse(carat<1.5 & carat>.7, NA, carat)) %>% ggplot() + geom_histogram(aes(x=carattest)) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` ## Warning: Removed 20543 rows containing non-finite values (stat_bin). ``` ``` mutate(diamonds, carattest=ifelse(carat<1.5 & carat>.7, NA, color)) %>% ggplot() + geom_bar(aes(x=carattest)) ``` ``` ## Warning: Removed 20543 rows containing non-finite values (stat_count). ``` For a character variable, though, it creates a new bar for the `NA`s. ``` mutate(diamonds, carattest=ifelse(carat<1.5 & carat>.7, NA, color)) %>% ggplot() + geom_bar(aes(x = as.character(carattest))) ``` *2\. What does na.rm \= TRUE do in `mean()` and `sum()`?* It filters `NA`s out of the vector of values. 7\.5: Covariation ----------------- ### 7\.5\.1\.1\. *1\. Use what you’ve learned to improve the visualisation of the departure times of cancelled vs. non\-cancelled flights.* Looks like while non\-cancelled flights happen at similar frequency in mornings and evenings, cancelled flights happen at a greater frequency in the evenings. ``` nycflights13::flights %>% mutate( cancelled = is.na(dep_time), sched_hour = sched_dep_time %/% 100, sched_min = sched_dep_time %% 100, sched_dep_time = sched_hour + sched_min / 60 ) %>% ggplot(mapping = aes(x=sched_dep_time, y=..density..)) + geom_freqpoly(mapping = aes(colour = cancelled), binwidth = .25)+ xlim(c(5,25)) ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_bin). ``` ``` ## Warning: Removed 4 rows containing missing values (geom_path). ``` Let’s look at the same plot but smooth the distributions to make the pattern easier to see. ``` nycflights13::flights %>% mutate( cancelled = is.na(dep_time), sched_hour = sched_dep_time %/% 100, sched_min = sched_dep_time %% 100, sched_dep_time = sched_hour + sched_min / 60 ) %>% ggplot(mapping = aes(x=sched_dep_time)) + geom_density(mapping = aes(fill = cancelled), alpha = 0.30)+ xlim(c(5,25)) ``` ``` ## Warning: Removed 1 rows containing non-finite values (stat_density). 
``` *2\.What variable in the diamonds dataset is most important for predicting the price of a diamond? How is that variable correlated with cut? Why does the combination of those two relationships lead to lower quality diamonds being more expensive?* `carat` is the most important for predicting price. ``` cor(diamonds$price, select(diamonds, carat, depth, table, x, y, z)) ``` ``` ## carat depth table x y z ## [1,] 0.9215913 -0.0106474 0.1271339 0.8844352 0.8654209 0.8612494 ``` fair `cut` seem to associate with a higher `carat` thus while lower quality diamonds may be selling for more that is being driven by the `carat` of the diamond (the most important factor in `price`) and the quality simply cannot offset this. ``` ggplot(data = diamonds, aes(x = cut, y = carat))+ geom_boxplot()+ coord_flip() ``` *3\.Install the `ggstance` package, and create a horizontal boxplot. How does this compare to using `coord_flip()`?* ``` ggplot(diamonds)+ ggstance::geom_boxploth(aes(x = carat, y = cut)) ggplot(diamonds)+ geom_boxplot(aes(x = cut, y = carat))+ coord_flip() ``` * Looks like it does the exact same thing as flipping `x` and `y` and using `coord_flip()` *4\. One problem with boxplots is that they were developed in an era of much smaller datasets and tend to display a prohibitively large number of “outlying values”. One approach to remedy this problem is the letter value plot. Install the lvplot package, and try using `geom_lv()` to display the distribution of `price` vs `cut`. What do you learn? How do you interpret the plots?* I found [this](https://stats.stackexchange.com/questions/301159/understanding-and-interpreting-letter-value-boxplots) helpful This produces a ‘letter\-value’ boxplot which means that in the first box you have the middle \~1/2 of data, then in the adoining boxes the next \~1/4, so within the middle 3 boxes you have the middle \~3/4 of data, next two boxes is \~7/8ths, then \~15/16th etc. ``` set.seed(1234) a <- diamonds %>% ggplot()+ lvplot::geom_lv(aes(x = cut, y = price)) set.seed(1234) b <- diamonds %>% ggplot()+ geom_boxplot(aes(x = cut, y = price)) ``` Perhaps a helpful way to understand this is to see what it looks like at different specified ‘k’ values (which) You can see the letters when you add `fill = ..LV..` to the aesthetic. ``` diamonds %>% ggplot()+ lvplot::geom_lv(aes(x = cut, y = price, alpha = ..LV..), fill = "blue")+ scale_alpha_discrete(range = c(0.7, 0)) ``` ``` ## Warning: Using alpha for a discrete variable is not advised. ``` ``` diamonds %>% ggplot()+ lvplot::geom_lv(aes(x = cut, y = price, fill = ..LV..)) ``` Letters represent ‘median’, ‘fourths’, ‘eights’… *5\. Compare and contrast `geom_violin()` with a facetted `geom_histogram()`, or a coloured `geom_freqpoly()`. What are the pros and cons of each method?* ``` ggplot(diamonds,aes(x = cut, y = carat))+ geom_violin() ggplot(diamonds,aes(colour = cut, x = carat, y = ..density..))+ geom_freqpoly() ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` ggplot(diamonds, aes(x = carat, y = ..density..))+ geom_histogram()+ facet_wrap(~cut) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` I like how `geom_freqpoly` has points directly overlaying but it can also be tough to read some, and the lines can overlap and be tough to tell apart, you also have to specify `density` for this and `geom_histogram` whereas for `geom_violin` it is the default. 
The tails in `geom_violin` can be easy to read but they also pull these for each of the of the values whereas by faceting `geomo_histogram` and setting `scales = "free"` you can have independent scales. I think the biggest advantage of the histogram is that it is the most familiar so people will know what you’re looking at. *6\. If you have a small dataset, it’s sometimes useful to use `geom_jitter()` to see the relationship between a continuous and categorical variable. The ggbeeswarm package provides a number of methods similar to `geom_jitter()`. List them and briefly describe what each one does.* ``` ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_point() ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_jitter() ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_beeswarm() ``` ``` ## Warning in f(...): The default behavior of beeswarm has changed in version ## 0.6.0. In versions <0.6.0, this plot would have been dodged on the y- ## axis. In versions >=0.6.0, grouponX=FALSE must be explicitly set to group ## on y-axis. Please set grouponX=TRUE/FALSE to avoid this warning and ensure ## proper axis choice. ``` ``` ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_quasirandom() ``` ``` ## Warning in f(...): The default behavior of beeswarm has changed in version ## 0.6.0. In versions <0.6.0, this plot would have been dodged on the y- ## axis. In versions >=0.6.0, grouponX=FALSE must be explicitly set to group ## on y-axis. Please set grouponX=TRUE/FALSE to avoid this warning and ensure ## proper axis choice. ``` `geom_jitter` is similar to `geom_point` but it provides random noise to the points. You can control these with the `width` and `height` arguments. This is valuable as it allows you to better see points that may overlap one another. `geom_beeswarm` adds variation in a uniform pattern by default across only the x\-axis. `geom-quasirandom` also defaults to distributing the points across the x\-axis however it produces quasi\-random variation, ‘quasi’ because it looks as though points follow some interrelationship[21](#fn21) and if you run the plot multiple times you will get the exact same plot whereas for `geom_jitter` you will get a slightly different plot each time. To see the differences between `geom_beeswarm` and geom\_quasirandom\` it’s helpful to look at the plots above, but holding the y value constant at 1\. ``` plot_orig <- ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_point() plot_bees <- ggplot(mpg, aes(x = 1, y = cty, color = drv))+ geom_beeswarm() plot_quasi <- ggplot(mpg, aes(x = 1, y = cty, color = drv))+ geom_quasirandom() gridExtra::grid.arrange(plot_orig, plot_bees, plot_quasi, ncol = 1) ``` ### 7\.5\.2\.1\. *1\. How could you rescale the count dataset above to more clearly show the distribution of cut within colour, or colour within cut?* Proportion cut in color: (change `group_by()` to `group_by(cut, color)` to set\-up the converse) ``` cut_in_color_graph <- diamonds %>% group_by(color, cut) %>% summarise(n = n()) %>% mutate(proportion_cut_in_color = n/sum(n)) %>% ggplot(aes(x = color, y = cut))+ geom_tile(aes(fill = proportion_cut_in_color))+ labs(fill = "proportion\ncut in color") cut_in_color_graph ``` This makes it clear that `ideal` cuts dominate the proportions of multiple colors, not ust G[22](#fn22) *2\. Use `geom_tile()` together with dplyr to explore how average flight delays vary by destination and month of year. What makes the plot difficult to read? 
How could you improve it?* I improved the original graph by adding in a filter so that only destinations that received over 10000 flights were included: ``` flights %>% group_by(dest, month) %>% summarise(delay_mean = mean(dep_delay, na.rm=TRUE), n = n()) %>% mutate(sum_n = sum(n)) %>% select(dest, month, delay_mean, n, sum_n) %>% as.data.frame() %>% filter(dest == "ABQ") %>% #the sum on n will be at the dest level here filter(sum_n > 30) %>% ggplot(aes(x = as.factor(month), y = dest, fill = delay_mean))+ geom_tile() ``` Another way to improve it may be to group the destinations into regions. This also will prevent you from filtering out data. We aren’t given region information, but we do have lat and long points in the `airports` dataset. See [Appendix](28-graphics-for-communication.html#appendix-13) for notes *3\. Why is it slightly better to use `aes(x = color, y = cut)` rather than `aes(x = cut, y = color)` in the example above?* If you’re comparing the proportion of cut in color and want to be looking at how the specific cut proportion is changing, it may easier to view this while looking left to right vs. down to up. Compare the two plots below. ``` cut_in_color_graph cut_in_color_graph+ coord_flip() ``` ### 7\.5\.3\. Two\-d histograms ``` smaller <- diamonds %>% filter(carat < 3) ggplot(data = smaller) + geom_hex(mapping = aes(x = carat, y = price)) #can change bin number ggplot(data = smaller) + geom_bin2d(mapping = aes(x = carat, y = price), bins = c(30, 30)) # #or binwidth (roughly equivalent chart would be created) # ggplot(data = smaller) + # geom_bin2d(mapping = aes(x = carat, y = price), binwidth = c(.1, 1000)) ``` Binned boxplots, violins, and lvs ``` #split by width ggplot(smaller, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_width(carat, 0.1))) #split to get approximately same number in each box with cut_number() ggplot(smaller, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_number(carat, 20))) ``` These methods don’t seem to work quite as well with violin plots or letter value plots: ``` ##violin ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_width(carat, 0.1))) ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_number(carat, 20))) ##letter value ggplot(smaller, aes(x = carat, y = price))+ lvplot::geom_lv(aes(group = cut_width(carat, 0.1))) ggplot(smaller, aes(x = carat, y = price))+ lvplot::geom_lv(aes(group = cut_number(carat, 20))) ``` They look a little bit improved if you allow for fewer values per bin compared to the examples with `geom_boxplot()` ``` ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_number(carat, 10))) ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_width(carat, 0.25))) ``` ### 7\.5\.3\.1\. *1\. Instead of summarising the conditional distribution with a boxplot, you could use a frequency polygon. What do you need to consider when using `cut_width()` vs `cut_number()`? How does that impact a visualisation of the 2d distribution of carat and price?* You should keep in mind how many lines you are going to create, they may overlap each other and look busy if you’re not careful. ``` ggplot(smaller, aes(x = price)) + geom_freqpoly(aes(colour = cut_number(carat, 10))) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` For the visualization below I wrapped it in the funciton `plotly::ggplotly()`. This funciton wraps your ggplot in html so that you can do things like hover over the points. 
``` p <- ggplot(smaller, aes(x=price))+ geom_freqpoly(aes(colour = cut_width(carat, 0.25))) plotly::ggplotly(p) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` *2\. Visualise the distribution of `carat`, partitioned by `price.`* ``` ggplot(diamonds, aes(x = price, y = carat))+ geom_violin(aes(group = cut_width(price, 2500))) ``` *3\. How does the `price` distribution of very large diamonds compare to small diamonds. Is it as you expect, or does it surprise you?* ``` diamonds %>% mutate(percent_rank = percent_rank(carat), small = percent_rank < 0.025, large = percent_rank > 0.975) %>% filter(small | large) %>% ggplot(aes(large, price)) + geom_violin()+ facet_wrap(~large) ``` Small diamonds have a left\-skewed `price` distribution, large diamonds have a right skewed `price` distribution. *4\. Combine two of the techniques you’ve learned to visualise the combined distribution of cut, carat, and price.* ``` ggplot(diamonds, aes(x = carat, y = price))+ geom_jitter(aes(colour = cut), alpha = 0.2)+ geom_smooth(aes(colour = cut)) ``` ``` ## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` ``` ggplot(diamonds, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_width(carat, 0.5), colour = cut))+ facet_grid(. ~ cut) ##I think this gives a better visualization, but is a little more complicated to produce, I also have the github version of ggplot and do not know whether the `preserve` arg is available in current CRAN installation. diamonds %>% mutate(carat = cut(carat, 5)) %>% ggplot(aes(x = carat, y = price))+ geom_boxplot(aes(group = interaction(cut_width(carat, 0.5), cut), fill = cut), position = position_dodge(preserve = "single")) ``` *5\.Two dimensional plots reveal outliers that are not visible in one dimensional plots. For example, some points in the plot below have an unusual combination of x and y values, which makes the points outliers even though their x and y values appear normal when examined separately.* ``` ggplot(data = diamonds) + geom_point(mapping = aes(x = x, y = y)) + coord_cartesian(xlim = c(4, 11), ylim = c(4, 11)) ``` *Why is a scatterplot a better display than a binned plot for this case?* Binned plots give less precise value estimates at each point (constrained by the granularity of the binning) so outliers do not show\-up as clearly. They also show less precise relationships between the data. The level of variability (at least with boxplots) can also be tougher to intuit. For example, let’s look at the plot below as a binned boxplot. ``` ggplot(data = diamonds) + geom_boxplot(mapping = aes(x = cut_width(x, 1), y = y)) + coord_cartesian(xlim = c(4, 11), ylim = c(4, 11)) ``` Appendix -------- ### 7\.5\.2\.1\.2\. Plot below shows four regions I’ll split the country into. Seems like for a few destinations the lat and long points were likely misentered (probably backwards). ``` all_states <- map_data("state") p <- geom_polygon(data = all_states, aes(x = long, y = lat, group = group, label = NULL), colour = "white", fill = "grey10") dest_regions <- nycflights13::airports %>% mutate(lat_cut = cut(percent_rank(lat), 2, labels = c("S", "N")), lon_cut = cut(percent_rank(lon), 2, labels = c("W", "E")), quadrant = paste0(lat_cut, lon_cut)) point_plot <- dest_regions %>% ggplot(aes(lon, lat, colour = quadrant))+ p+ geom_point() point_plot+ coord_quickmap() ``` Now let’s join our region information with our flight data and do our calculations grouping by `quadrant` rather than `dest`. 
Note that those `quadrant`s with `NA` (did not join with `flights`) looked to be Puerto Rico or other non\-state locations. ``` flights %>% left_join(dest_regions, by = c("dest" = "faa")) %>% group_by(quadrant, month) %>% summarise(delay_mean = mean(dep_delay, na.rm=TRUE), n = n()) %>% mutate(sum_n = sum(n)) %>% #the sum on n will be at the dest level here # filter(sum_n > 10000) %>% ggplot(aes(x = as.factor(month), y = quadrant, fill = delay_mean))+ geom_tile()+ scale_fill_gradient2(low = "blue", high = "red") ``` ### 7\.5\.3\.1\.4\. To get the `fill` value to vary you need to iterate through and make each graph separately; you can’t just use faceting. ``` diamonds_nest <- diamonds %>% group_by(cut) %>% tidyr::nest() plot_free <- function(df, name){ ggplot(df)+ geom_bin2d(aes(carat, price))+ ggtitle(name) } gridExtra::grid.arrange(grobs = mutate(diamonds_nest, out = purrr::map2(data, cut, plot_free))$out) ``` ``` diamonds %>% mutate(cut = forcats::as_factor(as.character(cut), levels = c("Fair", "Good", "Very Good", "Premium", "Ideal"))) %>% # with(contrasts(cut)) lm(log(price) ~ log(carat) + cut, data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = log(price) ~ log(carat) + cut, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.52247 -0.16484 -0.00587 0.16087 1.38115 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 8.517337 0.001996 4267.70 <2e-16 *** ## log(carat) 1.695771 0.001910 887.68 <2e-16 *** ## cutPremium -0.078994 0.002810 -28.11 <2e-16 *** ## cutGood -0.153967 0.004046 -38.06 <2e-16 *** ## cutVery Good -0.076458 0.002904 -26.32 <2e-16 *** ## cutFair -0.317212 0.006632 -47.83 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.2545 on 53934 degrees of freedom ## Multiple R-squared: 0.9371, Adjusted R-squared: 0.9371 ## F-statistic: 1.607e+05 on 5 and 53934 DF, p-value: < 2.2e-16 ``` ``` contrasts(diamonds$cut) ``` ``` ## .L .Q .C ^4 ## [1,] -0.6324555 0.5345225 -3.162278e-01 0.1195229 ## [2,] -0.3162278 -0.2672612 6.324555e-01 -0.4780914 ## [3,] 0.0000000 -0.5345225 -4.095972e-16 0.7171372 ## [4,] 0.3162278 -0.2672612 -6.324555e-01 -0.4780914 ## [5,] 0.6324555 0.5345225 3.162278e-01 0.1195229 ``` ``` count(diamonds, cut) ``` ``` ## # A tibble: 5 x 2 ## cut n ## <ord> <int> ## 1 Fair 1610 ## 2 Good 4906 ## 3 Very Good 12082 ## 4 Premium 13791 ## 5 Ideal 21551 ```
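Because the model above is fit on `log(price)`, one way to read the cut coefficients is to exponentiate them into multiplicative effects on price. This is a minimal sketch using the estimates printed above; the omitted baseline level here is Ideal.

```
# Back-transform the log(price) coefficients printed above into multiplicative
# effects on price relative to the baseline cut (Ideal), holding carat constant.
exp(c(Premium = -0.078994, Good = -0.153967,
      `Very Good` = -0.076458, Fair = -0.317212))
# Roughly 0.92, 0.86, 0.93, and 0.73 -- e.g. a Fair cut diamond sells for about
# 27% less than an Ideal cut diamond of the same carat.
```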
``` plot_orig <- ggplot(mpg, aes(x = displ, y = cty, color = drv))+ geom_point() plot_bees <- ggplot(mpg, aes(x = 1, y = cty, color = drv))+ geom_beeswarm() plot_quasi <- ggplot(mpg, aes(x = 1, y = cty, color = drv))+ geom_quasirandom() gridExtra::grid.arrange(plot_orig, plot_bees, plot_quasi, ncol = 1) ``` ### 7\.5\.2\.1\. *1\. How could you rescale the count dataset above to more clearly show the distribution of cut within colour, or colour within cut?* Proportion cut in color: (change `group_by()` to `group_by(cut, color)` to set\-up the converse) ``` cut_in_color_graph <- diamonds %>% group_by(color, cut) %>% summarise(n = n()) %>% mutate(proportion_cut_in_color = n/sum(n)) %>% ggplot(aes(x = color, y = cut))+ geom_tile(aes(fill = proportion_cut_in_color))+ labs(fill = "proportion\ncut in color") cut_in_color_graph ``` This makes it clear that `ideal` cuts dominate the proportions of multiple colors, not ust G[22](#fn22) *2\. Use `geom_tile()` together with dplyr to explore how average flight delays vary by destination and month of year. What makes the plot difficult to read? How could you improve it?* I improved the original graph by adding in a filter so that only destinations that received over 10000 flights were included: ``` flights %>% group_by(dest, month) %>% summarise(delay_mean = mean(dep_delay, na.rm=TRUE), n = n()) %>% mutate(sum_n = sum(n)) %>% select(dest, month, delay_mean, n, sum_n) %>% as.data.frame() %>% filter(dest == "ABQ") %>% #the sum on n will be at the dest level here filter(sum_n > 30) %>% ggplot(aes(x = as.factor(month), y = dest, fill = delay_mean))+ geom_tile() ``` Another way to improve it may be to group the destinations into regions. This also will prevent you from filtering out data. We aren’t given region information, but we do have lat and long points in the `airports` dataset. See [Appendix](28-graphics-for-communication.html#appendix-13) for notes *3\. Why is it slightly better to use `aes(x = color, y = cut)` rather than `aes(x = cut, y = color)` in the example above?* If you’re comparing the proportion of cut in color and want to be looking at how the specific cut proportion is changing, it may easier to view this while looking left to right vs. down to up. Compare the two plots below. ``` cut_in_color_graph cut_in_color_graph+ coord_flip() ``` ### 7\.5\.3\. 
Two\-d histograms ``` smaller <- diamonds %>% filter(carat < 3) ggplot(data = smaller) + geom_hex(mapping = aes(x = carat, y = price)) #can change bin number ggplot(data = smaller) + geom_bin2d(mapping = aes(x = carat, y = price), bins = c(30, 30)) # #or binwidth (roughly equivalent chart would be created) # ggplot(data = smaller) + # geom_bin2d(mapping = aes(x = carat, y = price), binwidth = c(.1, 1000)) ``` Binned boxplots, violins, and lvs ``` #split by width ggplot(smaller, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_width(carat, 0.1))) #split to get approximately same number in each box with cut_number() ggplot(smaller, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_number(carat, 20))) ``` These methods don’t seem to work quite as well with violin plots or letter value plots: ``` ##violin ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_width(carat, 0.1))) ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_number(carat, 20))) ##letter value ggplot(smaller, aes(x = carat, y = price))+ lvplot::geom_lv(aes(group = cut_width(carat, 0.1))) ggplot(smaller, aes(x = carat, y = price))+ lvplot::geom_lv(aes(group = cut_number(carat, 20))) ``` They look a little bit improved if you allow for fewer values per bin compared to the examples with `geom_boxplot()` ``` ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_number(carat, 10))) ggplot(smaller, aes(x = carat, y = price))+ geom_violin(aes(group = cut_width(carat, 0.25))) ``` ### 7\.5\.3\.1\. *1\. Instead of summarising the conditional distribution with a boxplot, you could use a frequency polygon. What do you need to consider when using `cut_width()` vs `cut_number()`? How does that impact a visualisation of the 2d distribution of carat and price?* You should keep in mind how many lines you are going to create, they may overlap each other and look busy if you’re not careful. ``` ggplot(smaller, aes(x = price)) + geom_freqpoly(aes(colour = cut_number(carat, 10))) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` For the visualization below I wrapped it in the funciton `plotly::ggplotly()`. This funciton wraps your ggplot in html so that you can do things like hover over the points. ``` p <- ggplot(smaller, aes(x=price))+ geom_freqpoly(aes(colour = cut_width(carat, 0.25))) plotly::ggplotly(p) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` *2\. Visualise the distribution of `carat`, partitioned by `price.`* ``` ggplot(diamonds, aes(x = price, y = carat))+ geom_violin(aes(group = cut_width(price, 2500))) ``` *3\. How does the `price` distribution of very large diamonds compare to small diamonds. Is it as you expect, or does it surprise you?* ``` diamonds %>% mutate(percent_rank = percent_rank(carat), small = percent_rank < 0.025, large = percent_rank > 0.975) %>% filter(small | large) %>% ggplot(aes(large, price)) + geom_violin()+ facet_wrap(~large) ``` Small diamonds have a left\-skewed `price` distribution, large diamonds have a right skewed `price` distribution. *4\. 
Combine two of the techniques you’ve learned to visualise the combined distribution of cut, carat, and price.* ``` ggplot(diamonds, aes(x = carat, y = price))+ geom_jitter(aes(colour = cut), alpha = 0.2)+ geom_smooth(aes(colour = cut)) ``` ``` ## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` ``` ggplot(diamonds, aes(x = carat, y = price))+ geom_boxplot(aes(group = cut_width(carat, 0.5), colour = cut))+ facet_grid(. ~ cut) ##I think this gives a better visualization, but is a little more complicated to produce, I also have the github version of ggplot and do not know whether the `preserve` arg is available in current CRAN installation. diamonds %>% mutate(carat = cut(carat, 5)) %>% ggplot(aes(x = carat, y = price))+ geom_boxplot(aes(group = interaction(cut_width(carat, 0.5), cut), fill = cut), position = position_dodge(preserve = "single")) ``` *5\. Two dimensional plots reveal outliers that are not visible in one dimensional plots. For example, some points in the plot below have an unusual combination of x and y values, which makes the points outliers even though their x and y values appear normal when examined separately.* ``` ggplot(data = diamonds) + geom_point(mapping = aes(x = x, y = y)) + coord_cartesian(xlim = c(4, 11), ylim = c(4, 11)) ``` *Why is a scatterplot a better display than a binned plot for this case?* Binned plots give less precise value estimates at each point (constrained by the granularity of the binning) so outliers do not show\-up as clearly. They also show less precise relationships between the data. The level of variability (at least with boxplots) can also be tougher to intuit. For example, let’s look at the plot below as a binned boxplot. ``` ggplot(data = diamonds) + geom_boxplot(mapping = aes(x = cut_width(x, 1), y = y)) + coord_cartesian(xlim = c(4, 11), ylim = c(4, 11)) ```
Appendix -------- ### 7\.5\.2\.1\.2\. The plot below shows the four regions I’ll split the country into. It seems that for a few destinations the lat and long points were likely mis\-entered (probably backwards). ``` all_states <- map_data("state") p <- geom_polygon(data = all_states, aes(x = long, y = lat, group = group, label = NULL), colour = "white", fill = "grey10") dest_regions <- nycflights13::airports %>% mutate(lat_cut = cut(percent_rank(lat), 2, labels = c("S", "N")), lon_cut = cut(percent_rank(lon), 2, labels = c("W", "E")), quadrant = paste0(lat_cut, lon_cut)) point_plot <- dest_regions %>% ggplot(aes(lon, lat, colour = quadrant))+ p+ geom_point() point_plot+ coord_quickmap() ``` Now let’s join our region information with our flight data and do our calculations grouping by `quadrant` rather than `dest`. Note that those `quadrant`s with `NA` (did not join with `flights`) looked to be Puerto Rico or other non\-state locations. ``` flights %>% left_join(dest_regions, by = c("dest" = "faa")) %>% group_by(quadrant, month) %>% summarise(delay_mean = mean(dep_delay, na.rm=TRUE), n = n()) %>% mutate(sum_n = sum(n)) %>% #the sum on n will be at the dest level here # filter(sum_n > 10000) %>% ggplot(aes(x = as.factor(month), y = quadrant, fill = delay_mean))+ geom_tile()+ scale_fill_gradient2(low = "blue", high = "red") ``` ### 7\.5\.3\.1\.4\. To get the `fill` value to vary, you need to iterate through and make each graph separately; you can’t just use a facet. ``` diamonds_nest <- diamonds %>% group_by(cut) %>% tidyr::nest() plot_free <- function(df, name){ ggplot(df)+ geom_bin2d(aes(carat, price))+ ggtitle(name) } gridExtra::grid.arrange(grobs = mutate(diamonds_nest, out = purrr::map2(data, cut, plot_free))$out) ``` ``` diamonds %>% mutate(cut = forcats::as_factor(as.character(cut), levels = c("Fair", "Good", "Very Good", "Premium", "Ideal"))) %>% # with(contrasts(cut)) lm(log(price) ~ log(carat) + cut, data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = log(price) ~ log(carat) + cut, data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1.52247 -0.16484 -0.00587 0.16087 1.38115 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 8.517337 0.001996 4267.70 <2e-16 *** ## log(carat) 1.695771 0.001910 887.68 <2e-16 *** ## cutPremium -0.078994 0.002810 -28.11 <2e-16 *** ## cutGood -0.153967 0.004046 -38.06 <2e-16 *** ## cutVery Good -0.076458 0.002904 -26.32 <2e-16 *** ## cutFair -0.317212 0.006632 -47.83 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.2545 on 53934 degrees of freedom ## Multiple R-squared: 0.9371, Adjusted R-squared: 0.9371 ## F-statistic: 1.607e+05 on 5 and 53934 DF, p-value: < 2.2e-16 ``` ``` contrasts(diamonds$cut) ``` ``` ## .L .Q .C ^4 ## [1,] -0.6324555 0.5345225 -3.162278e-01 0.1195229 ## [2,] -0.3162278 -0.2672612 6.324555e-01 -0.4780914 ## [3,] 0.0000000 -0.5345225 -4.095972e-16 0.7171372 ## [4,] 0.3162278 -0.2672612 -6.324555e-01 -0.4780914 ## [5,] 0.6324555 0.5345225 3.162278e-01 0.1195229 ``` ``` count(diamonds, cut) ``` ``` ## # A tibble: 5 x 2 ## cut n ## <ord> <int> ## 1 Fair 1610 ## 2 Good 4906 ## 3 Very Good 12082 ## 4 Premium 13791 ## 5 Ideal 21551 ```
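As a supplementary sketch (mine, not part of the original solution): the regression output above depends on how the factor is coded. Converting the ordered `cut` factor to an unordered one is what switches R from polynomial contrasts (`.L`, `.Q`, …) to treatment contrasts, i.e. one coefficient per level relative to a baseline. Assuming ggplot2’s `diamonds` is loaded (it is used throughout this chapter), a minimal comparison:

```
# Sketch only: compare the default contrast coding of the ordered factor
# diamonds$cut with an unordered copy of the same variable.
cut_unordered <- factor(as.character(diamonds$cut),
                        levels = c("Fair", "Good", "Very Good", "Premium", "Ideal"))

contrasts(diamonds$cut)   # polynomial contrasts: .L, .Q, .C, ^4 (shown above)
contrasts(cut_unordered)  # treatment contrasts: one dummy per level vs. the first level ("Fair" here)
```

In the regression output above the baseline happens to be `Ideal` (the first level after conversion by order of appearance), which is why the other four cuts each get their own `cut*` coefficient.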
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/10-tibbles.html
Ch. 10: Tibbles =============== **Key questions:** *none* **Functions and notes:** * `tibble`: produces a dataframe w/ some other helpful qualities that have advantages over `data.frame` + see `vignette("tibble")` * `as_tibble`: convert to a tibble * `tribble`: transposed tibble \- set\-up for data entry into a tibble in code * `print`: can use print to set how the tibble will print ``` nycflights13::flights %>% print(n = 2, width = Inf) ``` ``` ## # A tibble: 336,776 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## sched_arr_time arr_delay carrier flight tailnum origin dest air_time ## <int> <dbl> <chr> <int> <chr> <chr> <chr> <dbl> ## 1 819 11 UA 1545 N14228 EWR IAH 227 ## 2 830 20 UA 1714 N24211 LGA IAH 227 ## distance hour minute time_hour ## <dbl> <dbl> <dbl> <dttm> ## 1 1400 5 15 2013-01-01 05:00:00 ## 2 1416 5 29 2013-01-01 05:00:00 ## # ... with 3.368e+05 more rows ``` ``` * Also can convert with `as.data.frame` or use `options`, see [10.5: Exercises], problem 6 ``` * `enframe`: let’s you encode name and value, see [10\.5: Exercises](10-tibbles.html#exercises), problem 5 below * `class`: for checking the class of the object + Though is not fully accurate, in that the actual object class of vectors is “base”, not double, etc., so kind of lies… 10\.5: Exercises ---------------- *1\. How can you tell if an object is a tibble? (Hint: try printing mtcars, which is a regular data frame).* Could look at printing, e.g. only prints first 15 rows and enough variables where you can see them all, or by checking explicitly the `class` function[23](#fn23) *2\. Compare and contrast the following operations on a data.frame and equivalent tibble. What is different? Why might the default data frame behaviours cause you frustration?* * Tibbles never change type of input e.g. from strings to factors * Tibbles never change names of variables, never creates row names * Tibbles print in a more concise and readable format + This difference is made more stark if working with list\-columns *3\. If you have the name of a variable stored in an object, e.g. var \<\- “mpg”, how can you extract the reference variable from a tibble?* ``` var <- "var_name" # Will extract the column as an atomic vector df[[var]] ``` *4\. Practice referring to non\-syntactic names in the following data frame by:* ``` df <- tibble(`1` = 1:10, `2` = 11:20) ``` *a. Extracting the variable called 1\.* ``` df %>% select(1) ``` ``` ## # A tibble: 10 x 1 ## `1` ## <int> ## 1 1 ## 2 2 ## 3 3 ## 4 4 ## 5 5 ## 6 6 ## 7 7 ## 8 8 ## 9 9 ## 10 10 ``` *b. Plotting a scatterplot of 1 vs 2\.* ``` df %>% ggplot(aes(x = `1`, y = `2`))+ geom_point() ``` *c. Creating a new column called 3 which is 2 divided by 1\.* ``` df %>% mutate(`3` = `1` / `2`) ``` ``` ## # A tibble: 10 x 3 ## `1` `2` `3` ## <int> <int> <dbl> ## 1 1 11 0.0909 ## 2 2 12 0.167 ## 3 3 13 0.231 ## 4 4 14 0.286 ## 5 5 15 0.333 ## 6 6 16 0.375 ## 7 7 17 0.412 ## 8 8 18 0.444 ## 9 9 19 0.474 ## 10 10 20 0.5 ``` *d. Renaming the columns to one, two and three.* ``` df %>% mutate(`3` = `1` / `2`) %>% rename(one = `1`, two = `2`, three = `3`) ``` ``` ## # A tibble: 10 x 3 ## one two three ## <int> <int> <dbl> ## 1 1 11 0.0909 ## 2 2 12 0.167 ## 3 3 13 0.231 ## 4 4 14 0.286 ## 5 5 15 0.333 ## 6 6 16 0.375 ## 7 7 17 0.412 ## 8 8 18 0.444 ## 9 9 19 0.474 ## 10 10 20 0.5 ``` *5\. What does `tibble::enframe()` do? 
When might you use it?* Let’s you encode “name” and “value” as a tibble from a named vector ``` tibble::enframe(c(a = 5, b = 8)) ``` ``` ## # A tibble: 2 x 2 ## name value ## <chr> <dbl> ## 1 a 5 ## 2 b 8 ``` ``` tibble::enframe(c(a = 5:8, b = 7:10)) ``` ``` ## # A tibble: 8 x 2 ## name value ## <chr> <int> ## 1 a1 5 ## 2 a2 6 ## 3 a3 7 ## 4 a4 8 ## 5 b1 7 ## 6 b2 8 ## 7 b3 9 ## 8 b4 10 ``` *6\. What option controls how many additional column names are printed at the footer of a tibble?* * argument `tibble.width` ``` options(tibble.print_max = n, tibble.print_min = m) options(tibble.width = Inf) options(dplyr.print_min = Inf) #to always show all rows ```
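As a small supplement to the notes and exercise 1 above, here is a minimal sketch (mine, not from the original) showing `tribble()` for row\-wise data entry and two quick ways to check whether an object is a tibble:

```
library(tibble)

# tribble(): "transposed tibble" -- columns are declared with ~, values follow row by row
df <- tribble(
  ~x, ~y,
   1, "a",
   2, "b"
)

class(df)              # "tbl_df" "tbl" "data.frame"
is_tibble(df)          # TRUE
is_tibble(mtcars)      # FALSE -- mtcars is a plain data.frame
```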
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/11-data-import.html
Ch. 11: Data import =================== **Key questions:** * 11\.2\.2 \# 1, 5 **Functions and notes:** * `read_csv()` reads comma delimited files, `read_csv2()` reads semicolon separated files (common in countries where `,` is used as the decimal place), `read_tsv()` reads tab delimited files, and `read_delim()` reads in files with any delimiter. * `read_fwf()` reads fixed width files. You can specify fields either by their widths with `fwf_widths()` or their position with `fwf_positions()`. `read_table()` reads a common variation of fixed width files where columns are separated by white space. * `data.table::fread`, good for raw speed * `read_log()` reads Apache style log files. (But also check out [webreadr](https://github.com/Ironholds/webreadr) which is built on top of `read_log()` and provides many more helpful tools.) ``` read_log(readr_example("example.log")) ``` ``` ## Parsed with column specification: ## cols( ## X1 = col_character(), ## X2 = col_character(), ## X3 = col_character(), ## X4 = col_character(), ## X5 = col_character(), ## X6 = col_integer(), ## X7 = col_integer() ## ) ``` ``` ## # A tibble: 2 x 7 ## X1 X2 X3 X4 X5 X6 X7 ## <chr> <chr> <chr> <chr> <chr> <int> <int> ## 1 172.21.~ <NA> "Microsoft~ 08/Apr/2001~ GET /scripts/iisadmi~ 200 3401 ## 2 127.0.0~ <NA> frank 10/Oct/2000~ GET /apache_pb.gif H~ 200 2326 ``` ``` # readr_example finds the correct path associated with the package by doing the following: ## system.file("extdata", path, package = "readr", mustWork = TRUE) ``` * `parse_*()`: take character vector and return more specialized vector + `parse_logical`, `parse_integer`, `parse_double`, `parse_number`[24](#fn24), `parse_character`, `parse_factor`(has `levels` as an argument), `parse_datetime`, `parse_date`, `parse_time` (these last three have an arg offormat\`) * `locale` argument for use in parse functions to affect formatting and to pass into argument `locale = locale(<arg> = "<value>"))` + for double, e.g. `locale = locale(decimal_mark = ",")` + for number, e.g. `locale = locale(grouping_mark = ".")` + for character, e.g. `locale = locale(encoding = "Latin1")` + for dates, e.g. `locale = locale(lang = "fr")` (to see built\-in language options use `date_name_langs` and can create own with `date_names`) * `problems`: returns problems on import * `charToRaw` will show underlying representation of a character[25](#fn25) * `guess_encoding`: can guess encoding – generally would use this with `charToRaw` and helps avoid figuring out encoding by hand ``` x1 <- "El Ni\xf1o was particularly bad this year" guess_encoding(charToRaw(x1)) ``` ``` ## # A tibble: 2 x 2 ## encoding confidence ## <chr> <dbl> ## 1 ISO-8859-1 0.46 ## 2 ISO-8859-9 0.23 ``` * If defaults don’t work (primarily for dates, times, numbers) can use following to specify parsing: Year : `%Y` (4 digits). : `%y` (2 digits); 00\-69 \-\> 2000\-2069, 70\-99 \-\> 1970\-1999\. Month `%m` (2 digits). `%b` (abbreviated name, like “Jan”). `%B` (full name, “January”). Day `%d` (2 digits). `%e` (optional leading space). Time `%H` 0\-23 hour. `%I` 0\-12, must be used with `%p`. `%p` AM/PM indicator. `%M` minutes. `%S` integer seconds. `%OS` real seconds. `%Z` Time zone (as name, e.g. `America/Chicago`). Beware of abbreviations: if you’re American, note that “EST” is a Canadian time zone that does not have daylight savings time. It is *not* Eastern Standard Time! We’ll come back to this \[time zones]. `%z` (as offset from UTC, e.g. `+0800`). Non\-digits `%.` skips one non\-digit character. 
`%*` skips any number of non\-digits. * `guess_parser`: returns what readr would think the character vector you provide it should be parsed into * `parse_guess`: uses readr’s guess of the vector type to parse the column * `col_*`: counterpoint to `parse_*` functions except for use when data is in a file rather than a string already loaded in R (as needed for `parse_*`) + `cols`: use this to pass in the `col_*` types, + `col_types = cols( x = col_double(), y = col_date() )` + to read in all columns as character use `col_types = cols(.default = col_character())` + can set `n_max` to smallish number if reading in large file and still debugging parsing issues * Recommend always input `cols`, if you want to be strict when loading in data set `stop_for_problems` * `read_lines` read into character vector of lines (use when having major issues) * `read_file` read in as character vector of length 1 (use when having major issues) * `read_rds` reads in R’s custom binary format[26](#fn26) * `feather::read_feather`: fast binary file format shared across languages[27](#fn27) * writing files^\[readr functions will encodes strings in UTF\-8 and saves dates and date\-times in ISO8601\): + `write_csv`, `write_tsv`, `write_excel_csv`, `write_rds`[28](#fn28), \* `feather::read_feather` * other packages for reading\-in / writing data: `haven`, `readxl`, `DBI`, `odbc`, `jsonlite`, `xml2`, `rio` 11\.2: Getting started ---------------------- ### 11\.2\.2\. *1\. What function would you use to read a file where fields were separated with “\|”?* `read_delim` for example: ``` read_delim("a|b|c\n1|2|3", delim = "|") ``` ``` ## # A tibble: 1 x 3 ## a b c ## <int> <int> <int> ## 1 1 2 3 ``` *2\. Apart from `file`, `skip`, and `comment`, what other arguments do `read_csv()` and `read_tsv()` have in common?*[29](#fn29) `col_names`, `col_types`, `locale`, `na`, `quoted_na`, `quote`, `trim_ws`, `skip`, `n_max`, `guess_max`, `progress` *3\. What are the most important arguments to `read_fwf()`?* `widths` *4\. Sometimes strings in a CSV file contain commas. To prevent them from causing problems they need to be surrounded by a quoting character, like " or ’.* By convention, `read_csv()` assumes that the quoting character will be ", and if you want to change it you’ll need to use `read_delim()` instead. *What arguments do you need to specify to read the following text into a data frame?* ``` "x,y\n1,'a,b'" ``` ``` ## [1] "x,y\n1,'a,b'" ``` ``` read_delim("x,y\n1,'a,b'", delim = ",", quote = "'") ``` ``` ## # A tibble: 1 x 2 ## x y ## <int> <chr> ## 1 1 a,b ``` *5\. Identify what is wrong with each of the following inline CSV files. What happens when you run the code?* * `read_csv("a,b\n1,2,3\n4,5,6")` + needs 3rd column header, skips 3rd argument on each line, corrected: `read_csv("a,b\n1,2\n3,4\n5,6")` * `read_csv("a,b,c\n1,2\n1,2,3,4")` + missing 3rd value on 2nd line so currently makes NA, corrected: `read_csv("a,b,c\n1,2, 1\n2,3,4")` * `read_csv("a,b\n\"1")` + 2nd value missing and 2nd quote mark missing (though quotes are unnecessary), corrected: `read_csv("a,b\n\"1\",\"2\"")` * `read_csv("a,b\n1,2\na,b")` + Have character and numeric types, * `read_csv("a;b\n1;3")` + need to make read\_csv2() because is seperated by semicolons, corrected: `read_csv2("a;b\n1;3")` 11\.3: Parsing a vector ----------------------- ### 11\.3\.5\. *1\. What are the most important arguments to `locale()`?* * It depends on the `parse_*` type, e.g. + for double, e.g. `locale = locale(decimal_mark = ",")` + for number, e.g. 
`locale = locale(grouping_mark = ".")` + for character, e.g. `locale = locale(encoding = "Latin1")` + for dates, e.g. `locale = locale(lang = "fr")` * Below are a few examples for double and number ``` parse_double("1.23") ``` ``` ## [1] 1.23 ``` ``` parse_double("1,23", locale = locale(decimal_mark=",")) ``` ``` ## [1] 1.23 ``` ``` parse_number("the cost is $125.34, it's a good deal") #Slightly different than book, captures decimal ``` ``` ## [1] 125.34 ``` ``` parse_number("$123,456,789") ``` ``` ## [1] 123456789 ``` ``` parse_number("$123.456.789") ``` ``` ## [1] 123.456 ``` ``` parse_number("$123.456.789", locale = locale(grouping_mark = "."))#used in europe ``` ``` ## [1] 123456789 ``` ``` parse_number("$123'456'789", locale = locale(grouping_mark = "'"))#used in Switzerland ``` ``` ## [1] 123456789 ``` *2\. What happens if you try and set `decimal_mark` and `grouping_mark` to the same character? What happens to the default value of `grouping_mark` when you set `decimal_mark` to “,”? What happens to the default value of decimal\_mark when you set the grouping\_mark to “.”?* * can’t set both to be same–if you change one, other automatically changes ``` parse_number("$135.435,45", locale = locale(grouping_mark = ".", decimal_mark = ",")) ``` ``` ## [1] 135435.4 ``` ``` parse_number("$135.435,45", locale = locale(grouping_mark = ".")) ``` ``` ## [1] 135435.4 ``` *3\. I didn’t discuss the `date_format` and `time_format` options to `locale()`. What do they do? Construct an example that shows when they might be useful.* * `date_format` and `time_format` in `locale()` let you set the default date and time formats ``` parse_date("31 january 2015", format = "%d %B %Y") ``` ``` ## [1] "2015-01-31" ``` ``` parse_date("31 january 2015", locale = locale(date_format = "%d %B %Y")) ``` ``` ## [1] "2015-01-31" ``` ``` #let's you change it in locale() ``` *4\. If you live outside the US, create a new locale object that encapsulates the settings for the types of file you read most commonly.* * I live in the US. *5\. What’s the difference between `read_csv()` and `read_csv2()`?* \* Second expects semicolons *6\. What are the most common encodings used in Europe? What are the most common encodings used in Asia? Do some googling to find out.* * Europe tends to use “%d\-%m\-%Y” * Asia tends to use “%d.%m.%Y” *7\. Generate the correct format string to parse each of the following dates and times:* ``` d1 <- "January 1, 2010" d2 <- "2015-Mar-07" d3 <- "06-Jun-2017" d4 <- c("August 19 (2015)", "July 1 (2015)") d5 <- "12/30/14" # Dec 30, 2014 t1 <- "1705" t2 <- "11:15:10.12 PM" t3 <- "11:::15:10.12 PM" ``` *Solutions:* ``` parse_date(d1, "%B %d, %Y") ``` ``` ## [1] "2010-01-01" ``` ``` parse_date(d2, "%Y-%b-%d") ``` ``` ## [1] "2015-03-07" ``` ``` parse_date(d3, "%d-%b-%Y") ``` ``` ## [1] "2017-06-06" ``` ``` parse_date(d3, "%d%.%b-%Y") #could use this alternatively ``` ``` ## [1] "2017-06-06" ``` ``` parse_date(d4, "%B %d (%Y)") ``` ``` ## [1] "2015-08-19" "2015-07-01" ``` ``` parse_date(d5, "%m/%d/%y") ``` ``` ## [1] "2014-12-30" ``` ``` parse_time(t1, "%H%M") ``` ``` ## 17:05:00 ``` ``` parse_time(t2, "%I:%M:%OS %p") ``` ``` ## 23:15:10.12 ``` ``` parse_time(t3, "%I%*%M:%OS %p") ``` ``` ## 23:15:10.12 ``` 11\.2: Getting started ---------------------- ### 11\.2\.2\. *1\. What function would you use to read a file where fields were separated with “\|”?* `read_delim` for example: ``` read_delim("a|b|c\n1|2|3", delim = "|") ``` ``` ## # A tibble: 1 x 3 ## a b c ## <int> <int> <int> ## 1 1 2 3 ``` *2\. 
Apart from `file`, `skip`, and `comment`, what other arguments do `read_csv()` and `read_tsv()` have in common?*[29](#fn29) `col_names`, `col_types`, `locale`, `na`, `quoted_na`, `quote`, `trim_ws`, `skip`, `n_max`, `guess_max`, `progress` *3\. What are the most important arguments to `read_fwf()`?* `widths` *4\. Sometimes strings in a CSV file contain commas. To prevent them from causing problems they need to be surrounded by a quoting character, like " or ’.* By convention, `read_csv()` assumes that the quoting character will be ", and if you want to change it you’ll need to use `read_delim()` instead. *What arguments do you need to specify to read the following text into a data frame?* ``` "x,y\n1,'a,b'" ``` ``` ## [1] "x,y\n1,'a,b'" ``` ``` read_delim("x,y\n1,'a,b'", delim = ",", quote = "'") ``` ``` ## # A tibble: 1 x 2 ## x y ## <int> <chr> ## 1 1 a,b ``` *5\. Identify what is wrong with each of the following inline CSV files. What happens when you run the code?* * `read_csv("a,b\n1,2,3\n4,5,6")` + needs 3rd column header, skips 3rd argument on each line, corrected: `read_csv("a,b\n1,2\n3,4\n5,6")` * `read_csv("a,b,c\n1,2\n1,2,3,4")` + missing 3rd value on 2nd line so currently makes NA, corrected: `read_csv("a,b,c\n1,2, 1\n2,3,4")` * `read_csv("a,b\n\"1")` + 2nd value missing and 2nd quote mark missing (though quotes are unnecessary), corrected: `read_csv("a,b\n\"1\",\"2\"")` * `read_csv("a,b\n1,2\na,b")` + Have character and numeric types, * `read_csv("a;b\n1;3")` + need to make read\_csv2() because is seperated by semicolons, corrected: `read_csv2("a;b\n1;3")` ### 11\.2\.2\. *1\. What function would you use to read a file where fields were separated with “\|”?* `read_delim` for example: ``` read_delim("a|b|c\n1|2|3", delim = "|") ``` ``` ## # A tibble: 1 x 3 ## a b c ## <int> <int> <int> ## 1 1 2 3 ``` *2\. Apart from `file`, `skip`, and `comment`, what other arguments do `read_csv()` and `read_tsv()` have in common?*[29](#fn29) `col_names`, `col_types`, `locale`, `na`, `quoted_na`, `quote`, `trim_ws`, `skip`, `n_max`, `guess_max`, `progress` *3\. What are the most important arguments to `read_fwf()`?* `widths` *4\. Sometimes strings in a CSV file contain commas. To prevent them from causing problems they need to be surrounded by a quoting character, like " or ’.* By convention, `read_csv()` assumes that the quoting character will be ", and if you want to change it you’ll need to use `read_delim()` instead. *What arguments do you need to specify to read the following text into a data frame?* ``` "x,y\n1,'a,b'" ``` ``` ## [1] "x,y\n1,'a,b'" ``` ``` read_delim("x,y\n1,'a,b'", delim = ",", quote = "'") ``` ``` ## # A tibble: 1 x 2 ## x y ## <int> <chr> ## 1 1 a,b ``` *5\. Identify what is wrong with each of the following inline CSV files. What happens when you run the code?* * `read_csv("a,b\n1,2,3\n4,5,6")` + needs 3rd column header, skips 3rd argument on each line, corrected: `read_csv("a,b\n1,2\n3,4\n5,6")` * `read_csv("a,b,c\n1,2\n1,2,3,4")` + missing 3rd value on 2nd line so currently makes NA, corrected: `read_csv("a,b,c\n1,2, 1\n2,3,4")` * `read_csv("a,b\n\"1")` + 2nd value missing and 2nd quote mark missing (though quotes are unnecessary), corrected: `read_csv("a,b\n\"1\",\"2\"")` * `read_csv("a,b\n1,2\na,b")` + Have character and numeric types, * `read_csv("a;b\n1;3")` + need to make read\_csv2() because is seperated by semicolons, corrected: `read_csv2("a;b\n1;3")` 11\.3: Parsing a vector ----------------------- ### 11\.3\.5\. *1\. 
What are the most important arguments to `locale()`?* * It depends on the `parse_*` type, e.g. + for double, e.g. `locale = locale(decimal_mark = ",")` + for number, e.g. `locale = locale(grouping_mark = ".")` + for character, e.g. `locale = locale(encoding = "Latin1")` + for dates, e.g. `locale = locale(lang = "fr")` * Below are a few examples for double and number ``` parse_double("1.23") ``` ``` ## [1] 1.23 ``` ``` parse_double("1,23", locale = locale(decimal_mark=",")) ``` ``` ## [1] 1.23 ``` ``` parse_number("the cost is $125.34, it's a good deal") #Slightly different than book, captures decimal ``` ``` ## [1] 125.34 ``` ``` parse_number("$123,456,789") ``` ``` ## [1] 123456789 ``` ``` parse_number("$123.456.789") ``` ``` ## [1] 123.456 ``` ``` parse_number("$123.456.789", locale = locale(grouping_mark = "."))#used in europe ``` ``` ## [1] 123456789 ``` ``` parse_number("$123'456'789", locale = locale(grouping_mark = "'"))#used in Switzerland ``` ``` ## [1] 123456789 ``` *2\. What happens if you try and set `decimal_mark` and `grouping_mark` to the same character? What happens to the default value of `grouping_mark` when you set `decimal_mark` to “,”? What happens to the default value of decimal\_mark when you set the grouping\_mark to “.”?* * can’t set both to be same–if you change one, other automatically changes ``` parse_number("$135.435,45", locale = locale(grouping_mark = ".", decimal_mark = ",")) ``` ``` ## [1] 135435.4 ``` ``` parse_number("$135.435,45", locale = locale(grouping_mark = ".")) ``` ``` ## [1] 135435.4 ``` *3\. I didn’t discuss the `date_format` and `time_format` options to `locale()`. What do they do? Construct an example that shows when they might be useful.* * `date_format` and `time_format` in `locale()` let you set the default date and time formats ``` parse_date("31 january 2015", format = "%d %B %Y") ``` ``` ## [1] "2015-01-31" ``` ``` parse_date("31 january 2015", locale = locale(date_format = "%d %B %Y")) ``` ``` ## [1] "2015-01-31" ``` ``` #let's you change it in locale() ``` *4\. If you live outside the US, create a new locale object that encapsulates the settings for the types of file you read most commonly.* * I live in the US. *5\. What’s the difference between `read_csv()` and `read_csv2()`?* \* Second expects semicolons *6\. What are the most common encodings used in Europe? What are the most common encodings used in Asia? Do some googling to find out.* * Europe tends to use “%d\-%m\-%Y” * Asia tends to use “%d.%m.%Y” *7\. Generate the correct format string to parse each of the following dates and times:* ``` d1 <- "January 1, 2010" d2 <- "2015-Mar-07" d3 <- "06-Jun-2017" d4 <- c("August 19 (2015)", "July 1 (2015)") d5 <- "12/30/14" # Dec 30, 2014 t1 <- "1705" t2 <- "11:15:10.12 PM" t3 <- "11:::15:10.12 PM" ``` *Solutions:* ``` parse_date(d1, "%B %d, %Y") ``` ``` ## [1] "2010-01-01" ``` ``` parse_date(d2, "%Y-%b-%d") ``` ``` ## [1] "2015-03-07" ``` ``` parse_date(d3, "%d-%b-%Y") ``` ``` ## [1] "2017-06-06" ``` ``` parse_date(d3, "%d%.%b-%Y") #could use this alternatively ``` ``` ## [1] "2017-06-06" ``` ``` parse_date(d4, "%B %d (%Y)") ``` ``` ## [1] "2015-08-19" "2015-07-01" ``` ``` parse_date(d5, "%m/%d/%y") ``` ``` ## [1] "2014-12-30" ``` ``` parse_time(t1, "%H%M") ``` ``` ## 17:05:00 ``` ``` parse_time(t2, "%I:%M:%OS %p") ``` ``` ## 23:15:10.12 ``` ``` parse_time(t3, "%I%*%M:%OS %p") ``` ``` ## 23:15:10.12 ``` ### 11\.3\.5\. *1\. What are the most important arguments to `locale()`?* * It depends on the `parse_*` type, e.g. + for double, e.g. 
Ch. 12: Tidy data ================= **Key questions:** * 12\.3\.3\. \#4 * 12\.4\.3\. \#1 * 12\.6\.1 \#4 **Functions and notes:** * `spread`: pivot, e.g. `spread(iris, Species)` * `gather`: unpivot, e.g. `gather(mpg, drv, class, key = "drive_or_class", value = "value")` * `separate`: one column into many, e.g. `separate(table3, rate, into = c("cases", "population"), sep = "/")` + default uses non\-alphanumeric character as `sep`, can also use number to separate by width * `extract` similar to separate but specify what to pull\-out rather than what to split by * `unite` inverse of separate ``` # example distinguishing separate, extract, unite tibble(x = c("a,b,c", "d,e,f", "h,i,j", "k,l,m")) %>% tidyr::separate(x, c("one", "two", "three"), sep = ",", remove = FALSE) %>% tidyr::unite(one, two, three, col = "x2", sep = ",", remove = FALSE) %>% tidyr::extract(x2, into = c("a", "b", "c"), regex = "([a-z]+),([a-z]+),([a-z]+)", remove = FALSE) ``` ``` ## # A tibble: 4 x 8 ## x x2 a b c one two three ## <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 a,b,c a,b,c a b c a b c ## 2 d,e,f d,e,f d e f d e f ## 3 h,i,j h,i,j h i j h i j ## 4 k,l,m k,l,m k l m k l m ``` * `complete()` takes a set of columns, and finds all unique combinations. It then ensures the original dataset contains all those values, filling in explicit NAs where necessary. * `fill()` takes a set of columns where you want missing values to be replaced by the most recent non\-missing value (sometimes called last observation carried forward). ``` # examples of complete and fill treatment <- tribble( ~ person, ~ treatment, ~response, "Derrick Whitmore", 1, 7, NA, 2, 10, NA, 3, 9, "Katherine Burke", 1, 4 ) treatment %>% fill(person) ``` ``` ## # A tibble: 4 x 3 ## person treatment response ## <chr> <dbl> <dbl> ## 1 Derrick Whitmore 1 7 ## 2 Derrick Whitmore 2 10 ## 3 Derrick Whitmore 3 9 ## 4 Katherine Burke 1 4 ``` ``` treatment %>% fill(person) %>% complete(person, treatment) ``` ``` ## # A tibble: 6 x 3 ## person treatment response ## <chr> <dbl> <dbl> ## 1 Derrick Whitmore 1 7 ## 2 Derrick Whitmore 2 10 ## 3 Derrick Whitmore 3 9 ## 4 Katherine Burke 1 4 ## 5 Katherine Burke 2 NA ## 6 Katherine Burke 3 NA ``` 12\.2: Tidy data ---------------- ### 12\.2\.1\. *1\. Using prose, describe how the variables and observations are organised in each of the sample tables.* * `table1`: each country\-year is a row with cases and pop as values * `table2`: each country\-year\-type is a row * `table3`: each country\-year is a row with rate containing values for both `cases` and `population` * `table4a` and `table4b`: a represents cases, b population, each row is a country and then column are the year for the value *2\. Compute the `rate` for `table2`, and `table4a` \+ `table4b`. You will need to perform four operations:* 1. Extract the number of TB cases per country per year. 2. Extract the matching population per country per year. 3. Divide cases by population, and multiply by 10000\. 4. Store back in the appropriate place. 5. Which representation is easiest to work with? Which is hardest? Why? 
with `table2`: ``` table2 %>% spread(type, count) %>% mutate(rate = 1000 * cases / population) %>% arrange(country, year) ``` ``` ## # A tibble: 6 x 5 ## country year cases population rate ## <chr> <int> <int> <int> <dbl> ## 1 Afghanistan 1999 745 19987071 0.0373 ## 2 Afghanistan 2000 2666 20595360 0.129 ## 3 Brazil 1999 37737 172006362 0.219 ## 4 Brazil 2000 80488 174504898 0.461 ## 5 China 1999 212258 1272915272 0.167 ## 6 China 2000 213766 1280428583 0.167 ``` with `table4` ‘a’ and ‘b’\`: ``` table4a %>% gather(2,3, key = "year", value = "cases") %>% inner_join(table4b %>% gather(c(2,3), key = "year", value = "population"), by = c("country", "year")) %>% mutate(rate = 1000 * cases / population) ``` ``` ## # A tibble: 6 x 5 ## country year cases population rate ## <chr> <chr> <int> <int> <dbl> ## 1 Afghanistan 1999 745 19987071 0.0373 ## 2 Brazil 1999 37737 172006362 0.219 ## 3 China 1999 212258 1272915272 0.167 ## 4 Afghanistan 2000 2666 20595360 0.129 ## 5 Brazil 2000 80488 174504898 0.461 ## 6 China 2000 213766 1280428583 0.167 ``` * between these, `table2` was easier, though `table1` would have been easiest – is fewer steps to get 1 row \= 1 observation (if we define an observation as a country in a year with certain attributes) *3\. Recreate the plot showing change in cases over time using `table2` instead of `table1`. What do you need to do first?* ``` table2 %>% spread(type, count) %>% ggplot(aes(x = year, y = cases, group = country))+ geom_line(colour = "grey50")+ geom_point(aes(colour = country)) ``` * first had to spread data 12\.3: Spreading and gathering ------------------------------ ### 12\.3\.3\. *1\. Why are `gather()` and `spread()` not perfectly symmetrical?* Carefully consider the following example: ``` stocks <- tibble( year = c(2015, 2015, 2016, 2016), half = c( 1, 2, 1, 2), return = c(1.88, 0.59, 0.92, 0.17) ) stocks %>% spread(year, return) %>% gather("year", "return", `2015`:`2016`) ``` ``` ## # A tibble: 4 x 3 ## half year return ## <dbl> <chr> <dbl> ## 1 1 2015 1.88 ## 2 2 2015 0.59 ## 3 1 2016 0.92 ## 4 2 2016 0.17 ``` (Hint: look at the variable types and think about column names.) * are not perfectly symmetrical, because type for key \= changes to character when using `gather` – column type information is not transferred. * position of columns change as well * Both spread() and gather() have a convert argument. What does it do?\* Use this to automatically change `key` column type, otherwise will default in `gather` for example to become a character type. *2\. Why does this code fail?* ``` table4a %>% gather(1999, 2000, key = "year", value = "cases") ``` ``` ## Error in inds_combine(.vars, ind_list): Position must be between 0 and n ``` Need backticks on year column names ``` table4a %>% gather(`1999`, `2000`, key = "year", value = "cases") ``` ``` ## # A tibble: 6 x 3 ## country year cases ## <chr> <chr> <int> ## 1 Afghanistan 1999 745 ## 2 Brazil 1999 37737 ## 3 China 1999 212258 ## 4 Afghanistan 2000 2666 ## 5 Brazil 2000 80488 ## 6 China 2000 213766 ``` *3\. Why does spreading this tibble fail? How could you add a new column to fix the problem?* ``` people <- tribble( ~name, ~key, ~value, #-----------------|--------|------ "Phillip Woods", "age", 45, "Phillip Woods", "height", 186, "Phillip Woods", "age", 50, "Jessica Cordero", "age", 37, "Jessica Cordero", "height", 156 ) people %>% spread(key = "key", value = "value") ``` ``` ## Error: Each row of output must be identified by a unique combination of keys. 
## Keys are shared for 2 rows: ## * 1, 3 ## Do you need to create unique ID with tibble::rowid_to_column()? ``` Fails because you have more than one age for philip woods, could add a unique ID column and it will work. ``` people %>% mutate(id = 1:n()) %>% spread(key = "key", value = "value") ``` ``` ## # A tibble: 5 x 4 ## name id age height ## <chr> <int> <dbl> <dbl> ## 1 Jessica Cordero 4 37 NA ## 2 Jessica Cordero 5 NA 156 ## 3 Phillip Woods 1 45 NA ## 4 Phillip Woods 2 NA 186 ## 5 Phillip Woods 3 50 NA ``` *4\. Tidy the simple tibble below. Do you need to spread or gather it? What are the variables?* ``` preg <- tribble( ~pregnant, ~male, ~female, "yes", NA, 10, "no", 20, 12 ) ``` Need to gather `gender` ``` preg %>% gather(male, female, key="gender", value="Number") ``` ``` ## # A tibble: 4 x 3 ## pregnant gender Number ## <chr> <chr> <dbl> ## 1 yes male NA ## 2 no male 20 ## 3 yes female 10 ## 4 no female 12 ``` 12\.4: Separating and uniting ----------------------------- ### 12\.4\.3\. *1\. What do the `extra` and `fill` arguments do in `separate()`? Experiment with the various options for the following two toy datasets.* ``` tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>% separate(x, c("one", "two", "three")) ``` ``` ## Warning: Expected 3 pieces. Additional pieces discarded in 1 rows [2]. ``` ``` ## # A tibble: 3 x 3 ## one two three ## <chr> <chr> <chr> ## 1 a b c ## 2 d e f ## 3 h i j ``` ``` tibble(x = c("a,b,c", "d,e", "f,g,i")) %>% separate(x, c("one", "two", "three")) ``` ``` ## Warning: Expected 3 pieces. Missing pieces filled with `NA` in 1 rows [2]. ``` ``` ## # A tibble: 3 x 3 ## one two three ## <chr> <chr> <chr> ## 1 a b c ## 2 d e <NA> ## 3 f g i ``` `fill` determines what to do when there are too few arguments, default is to fill right arguments with `NA` can change this though. ``` tribble(~a,~b, "so it goes","hello,you,are") %>% separate(b, into=c("e","f","g", "h"), sep=",", fill = "left") ``` ``` ## # A tibble: 1 x 5 ## a e f g h ## <chr> <chr> <chr> <chr> <chr> ## 1 so it goes <NA> hello you are ``` `extra` determines what to do when you have more splits than you do `into` spaces. Default is to drop extra Can change to limit num of splits to length of `into` with `extra = "merge"` ``` tribble(~a,~b, "so it goes","hello,you,are") %>% separate(b, into = c("e", "f"), sep = ",", extra = "merge") ``` ``` ## # A tibble: 1 x 3 ## a e f ## <chr> <chr> <chr> ## 1 so it goes hello you,are ``` *2\. Both `unite()` and `separate()` have a `remove` argument. What does it do? Why would you set it to `FALSE`?* `remove = FALSE` allows you to specify to keep the input column(s) ``` tibble(x = c("a,b,c", "d,e,f", "h,i,j", "k,l,m")) %>% separate(x, c("one", "two", "three"), remove = FALSE) %>% unite(one, two, three, col = "x2", sep = ",", remove = FALSE) ``` ``` ## # A tibble: 4 x 5 ## x x2 one two three ## <chr> <chr> <chr> <chr> <chr> ## 1 a,b,c a,b,c a b c ## 2 d,e,f d,e,f d e f ## 3 h,i,j h,i,j h i j ## 4 k,l,m k,l,m k l m ``` *3\. Compare and contrast `separate()` and `extract()`. Why are there three variations of separation (by position, by separator, and with groups), but only one unite?* `extract()` is like `separate()` but provide what to capture rather than what to split by as in `regex` instead of `sep`. 
``` df <- data.frame(x = c("a-b", "a-d", "b-c", "d&e", NA), y = 1) df %>% extract(col = x, into = c("1st", "2nd"), regex = "([A-z]).([A-z])") ``` ``` ## 1st 2nd y ## 1 a b 1 ## 2 a d 1 ## 3 b c 1 ## 4 d e 1 ## 5 <NA> <NA> 1 ``` ``` df %>% separate(col = x, into = c("1st", "2nd"), sep = "[^A-z]") ``` ``` ## 1st 2nd y ## 1 a b 1 ## 2 a d 1 ## 3 b c 1 ## 4 d e 1 ## 5 <NA> <NA> 1 ``` Because there are many ways to split something up, but only one way to bring multiple things together… 12\.5: Missing values --------------------- ### 12\.5\.1\. *1\. Compare and contrast the fill arguments to `spread()` and `complete()`.* Both fill in cells that are not present in the original dataset: `complete()` does it by adding rows for combinations that are missing, whereas `spread()` does it by widening the data, which naturally creates cells that had no corresponding row. The `fill` argument in each specifies what value should go into those created cells. ``` treatment2 <- tribble( ~ person, ~ treatment, ~response, "Derrick Whitmore", 1, 7, "Derrick Whitmore", 2, 10, "Derrick Whitmore", 3, 9, "Katherine Burke", 1, 4 ) treatment2 %>% complete(person, treatment, fill = list(response = 0)) ``` ``` ## # A tibble: 6 x 3 ## person treatment response ## <chr> <dbl> <dbl> ## 1 Derrick Whitmore 1 7 ## 2 Derrick Whitmore 2 10 ## 3 Derrick Whitmore 3 9 ## 4 Katherine Burke 1 4 ## 5 Katherine Burke 2 0 ## 6 Katherine Burke 3 0 ``` ``` treatment2 %>% spread(key = treatment, value = response, fill = 0) ``` ``` ## # A tibble: 2 x 4 ## person `1` `2` `3` ## <chr> <dbl> <dbl> <dbl> ## 1 Derrick Whitmore 7 10 9 ## 2 Katherine Burke 4 0 0 ``` *2\. What does the `.direction` argument to `fill()` do?* Lets you fill either up or down. E.g. below is a filling\-up example. ``` treatment <- tribble( ~ person, ~ treatment, ~response, "Derrick Whitmore", 1, 7, NA, 2, 10, NA, 3, 9, "Katherine Burke", 1, 4 ) treatment %>% fill(person, .direction = "up") ``` ``` ## # A tibble: 4 x 3 ## person treatment response ## <chr> <dbl> <dbl> ## 1 Derrick Whitmore 1 7 ## 2 Katherine Burke 2 10 ## 3 Katherine Burke 3 9 ## 4 Katherine Burke 1 4 ``` 12\.6 Case Study ---------------- ### 12\.6\.1\. *1\. In this case study I set `na.rm = TRUE` just to make it easier to check that we had the correct values. Is this reasonable? Think about how missing values are represented in this dataset. Are there implicit missing values? What’s the difference between an `NA` and zero?* In this case it’s reasonable: an `NA` perhaps means the metric wasn’t recorded in that year, whereas 0 means it was recorded but there were 0 cases. Implicit missing values are represented by, say, Afghanistan not having any reported cases for females. *2\. What happens if you neglect the `mutate()` step? (`mutate(key = stringr::str_replace(key, "newrel", "new_rel"))`)* The “newrel” codes don’t follow the `new_var_sexage` pattern, so `separate()` would split them into only two pieces – `var` would get the sex\-age part and `sexage` would be `NA` for those rows (with a warning). *3\. I claimed that `iso2` and `iso3` were redundant with `country`. Confirm this claim.* ``` who %>% select(1:3) %>% distinct() %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 219 ``` ``` who %>% select(1:3) %>% distinct() %>% unite(country, iso2, iso3, col = "country_combined") %>% count() ``` ``` ## # A tibble: 1 x 1 ## n ## <int> ## 1 219 ``` Both of the above are the same length.
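As a quick cross\-check of the redundancy claim above, you can also count distinct values directly: if `iso2` or `iso3` carried extra information, the combined key would have more distinct values than `country` alone. A small sketch using the same `who` data:

```
who %>%
  summarise(
    n_country  = n_distinct(country),
    n_iso2     = n_distinct(iso2),
    n_iso3     = n_distinct(iso3),
    n_combined = n_distinct(paste(country, iso2, iso3))
  )
# if the claim holds, all four counts are equal (219 here),
# i.e. each country maps to exactly one iso2 and one iso3 code
```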
*4\. For each country, year, and sex compute the total number of cases of TB. Make an informative visualisation of the data.* ``` who_present <- who %>% gather(code, value, new_sp_m014:newrel_f65, na.rm = TRUE) %>% mutate(code = stringr::str_replace(code, "newrel", "new_rel")) %>% separate(code, c("new", "var", "sexage")) %>% select(-new, -iso2, -iso3) %>% separate(sexage, c("sex", "age"), sep = 1) ``` ``` who_present %>% group_by(sex, year, country) %>% summarise(mean=mean(value)) %>% ggplot(aes(x=year, y=mean, colour=sex))+ geom_point()+ geom_jitter() ``` ``` #ratio of female tb cases over time who_present %>% group_by(sex, year) %>% summarise(meansex=sum(value)) %>% ungroup() %>% group_by(year) %>% mutate(tot=sum(meansex)) %>% ungroup() %>% mutate(ratio=meansex/tot) %>% filter(sex=="f") %>% ggplot(aes(x=year, y=ratio, colour=sex))+ geom_line() ``` ``` #countries with the most outbreaks who_present %>% group_by(country, year) %>% summarise(n=sum(value)) %>% ungroup() %>% group_by(country) %>% mutate(total_country=sum(n)) %>% filter(total_country>1000000) %>% ggplot(aes(x=year,y=n,colour=country))+ geom_line() ``` ``` #countries with the most split by gender as well who_present %>% group_by(country, sex, year) %>% summarise(n=sum(value)) %>% ungroup() %>% group_by(country) %>% mutate(total_country=sum(n)) %>% filter(total_country>1000000) %>% ggplot(aes(x=year,y=n,colour=sex))+ geom_line()+ facet_wrap(~country) ``` ``` #take log and summarise who_present %>% group_by(country, year) %>% summarise(n=sum(value), logn=log(n)) %>% ungroup() %>% group_by(country) %>% mutate(total_c=sum(n)) %>% filter(total_c>1000000) %>% ggplot(aes(x=year,y=logn, colour=country))+ geom_line(show.legend=TRUE) ``` ``` #average # of countries with more female TB cases who_present %>% group_by(country, year, sex) %>% summarise(n=sum(value), logn=log(n)) %>% ungroup() %>% group_by(country, year) %>% mutate(total_c=sum(n)) %>% ungroup() %>% mutate(perc_gender=n/total_c, femalemore=ifelse(perc_gender>.5,1,0)) %>% filter(sex=="f") %>% group_by(year) %>% summarise(summaryfem=mean(femalemore,na.rm=TRUE )) %>% ggplot(aes(x=year,y=summaryfem))+ geom_line() ```
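The plots above all start from grouped summaries of `who_present`; for reference, here is a minimal sketch of the literal computation the exercise asks for – the total number of TB cases for each country, year, and sex – plus one simple way to plot it:

```
who_totals <- who_present %>%
  group_by(country, year, sex) %>%
  summarise(cases = sum(value)) %>%
  ungroup()

# overall totals by year and sex, as one possible "informative" view
who_totals %>%
  group_by(year, sex) %>%
  summarise(cases = sum(cases)) %>%
  ggplot(aes(x = year, y = cases, colour = sex)) +
  geom_line()
```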
Ch. 13: Relational data ======================= **Key questions:** * 13\.4\.6 , \#1, 3, 4 * 13\.5\.1, \#2, 4 **Functions and notes:** > “The relations of three or more tables are always a property of the relations between each pairs.” *Three families of verbs in relational data:* * **Mutating joins**, which add new variables to one data frame from matching observations in another. + `inner_join`: match when equal + `left_join`: keep all observations in table in 1st arg + `right_join`: keep all observations in table in 2nd arg + `full_join`: keep all observations in table in 1st and 2nd arg * **Filtering joins**, which filter observations from one data frame based on whether or not they match an observation in the other table. + `semi_join(x, y)` **keeps** all observations in `x` that have a match in `y`. + `anti_join(x, y)` **drops** all observations in `x` that have a match in `y`. * **Set operations**, which treat observations as if they were set elements. + `intersect(x, y)`: return only observations in both `x` and `y` (when inputs are a df, is comparing across all values in a row). + `union(x, y)`: return unique observations in `x` and `y`. + `setdiff(x, y)`: return observations in `x`, but not in `y`. `base::merge()` can perform all four types of mutating join: | dplyr | merge | | --- | --- | | `inner_join(x, y)` | `merge(x, y)` | | `left_join(x, y)` | `merge(x, y, all.x = TRUE)` | | `right_join(x, y)` | `merge(x, y, all.y = TRUE)`, | | `full_join(x, y)` | `merge(x, y, all.x = TRUE, all.y = TRUE)` | SQL is the inspiration for dplyr’s conventions, so the translation is straightforward: | dplyr | SQL | | --- | --- | | `inner_join(x, y, by = "z")` | `SELECT * FROM x INNER JOIN y USING (z)` | | `left_join(x, y, by = "z")` | `SELECT * FROM x LEFT OUTER JOIN y USING (z)` | | `right_join(x, y, by = "z")` | `SELECT * FROM x RIGHT OUTER JOIN y USING (z)` | | `full_join(x, y, by = "z")` | `SELECT * FROM x FULL OUTER JOIN y USING (z)` | 13\.2 nycflights13 ------------------ ``` flights airlines airports planes weather ``` ### 13\.2\.1 1. *Imagine you wanted to draw (approximately) the route each plane flies from* *its origin to its destination. What variables would you need? What tables* *would you need to combine?* To draw a line from origin to destination, I need the lat lon points from airports`as well as the dest and origin variables from`flights\`. 2. *I forgot to draw the relationship between `weather` and `airports`.* *What is the relationship and how should it appear in the diagram?* `origin` from `weather connects to`faa`from`airports\` in a many to one relationship 3. *`weather` only contains information for the origin (NYC) airports. If* *it contained weather records for all airports in the USA, what additional* *relation would it define with `flights`?* It would connect to `dest`. 4. *We know that some days of the year are “special”, and fewer people than* *usual fly on them. How might you represent that data as a data frame?* *What would be the primary keys of that table? How would it connect to the* *existing tables?* Make a set of days that are less popular and have these dates connect by month and day 13\.3 Keys ---------- ### 13\.3\.1 1. *Add a surrogate key to `flights`.* ``` flights %>% mutate(surrogate_key = row_number()) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 20 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... 
## $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 55... ## $ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 60... ## $ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2... ## $ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 7... ## $ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 7... ## $ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -... ## $ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV",... ## $ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79... ## $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN... ## $ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR"... ## $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL"... ## $ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138... ## $ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 94... ## $ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5,... ## $ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, ... ## $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013... ## $ surrogate_key <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... ``` 2. *Identify the keys in the following datasets* 1. `Lahman::Batting`: player, year, stint ``` Lahman::Batting %>% count(playerID, yearID, stint) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: playerID <chr>, yearID <int>, stint <int>, n <int> ``` 2. `babynames::babynames`: name, sex, year ``` babynames::babynames %>% count(name, sex, year) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: name <chr>, sex <chr>, year <dbl>, n <int> ``` 3. `nasaweather::atmos`: lat, long, year, month ``` nasaweather::atmos %>% count(lat, long, year, month) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 5 ## # ... with 5 variables: lat <dbl>, long <dbl>, year <int>, month <int>, ## # n <int> ``` 4. `fueleconomy::vehicles`: id ``` fueleconomy::vehicles %>% count(id) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 2 ## # ... with 2 variables: id <int>, n <int> ``` 5. `ggplot2::diamonds`: needs surrogate ``` diamonds %>% count(x, y, z, depth, table, carat, cut, color, price, clarity ) %>% filter(n > 1) ``` ``` ## # A tibble: 143 x 11 ## x y z depth table carat cut color price clarity n ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <ord> <ord> <int> <ord> <int> ## 1 0 0 0 64.1 60 0.71 Good F 2130 SI2 2 ## 2 4.23 4.26 2.69 63.4 57 0.3 Good J 394 VS1 2 ## 3 4.26 4.23 2.69 63.4 57 0.3 Very Good J 506 VS1 2 ## 4 4.26 4.29 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 5 4.27 4.28 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 6 4.29 4.31 2.71 63 55 0.3 Very Good G 526 VS2 2 ## 7 4.29 4.31 2.73 63.5 56 0.31 Good D 571 SI1 2 ## 8 4.31 4.28 2.67 62.2 58 0.3 Premium D 709 SI1 2 ## 9 4.31 4.29 2.71 63 55 0.3 Ideal G 675 VS2 2 ## 10 4.31 4.29 2.73 63.5 56 0.31 Very Good D 732 SI1 2 ## # ... 
with 133 more rows ``` ``` diamonds %>% mutate(surrogate_id = row_number()) ``` ``` ## # A tibble: 53,940 x 11 ## carat cut color clarity depth table price x y z ## <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl> ## 1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43 ## 2 0.21 Prem~ E SI1 59.8 61 326 3.89 3.84 2.31 ## 3 0.23 Good E VS1 56.9 65 327 4.05 4.07 2.31 ## 4 0.290 Prem~ I VS2 62.4 58 334 4.2 4.23 2.63 ## 5 0.31 Good J SI2 63.3 58 335 4.34 4.35 2.75 ## 6 0.24 Very~ J VVS2 62.8 57 336 3.94 3.96 2.48 ## 7 0.24 Very~ I VVS1 62.3 57 336 3.95 3.98 2.47 ## 8 0.26 Very~ H SI1 61.9 55 337 4.07 4.11 2.53 ## 9 0.22 Fair E VS2 65.1 61 337 3.87 3.78 2.49 ## 10 0.23 Very~ H VS1 59.4 61 338 4 4.05 2.39 ## # ... with 53,930 more rows, and 1 more variable: surrogate_id <int> ``` 3. *Draw a diagram illustrating the connections between the `Batting`,`Master`, and `Salaries` tables in the Lahman package. Draw another diagram that shows the relationship between `Master`, `Managers`, `AwardsManagers`.* * `Lahman::Batting` and `Lahman::Master` combine by `playerID` * `Lahman::Batting` and `Lahman::Salaries` combine by `playerID`, `yearID` * `Lahman::Master` and `Lahman::Salaries` combine by `playerID` * `Lahman::Master` and `Lahman::Managers` combine by `playerID` * `Lahman::Master` and `Lahman::AwardsManagers` combine by `playerID`*How would you characterise the relationship between the `Batting`, `Pitching`, and `Fielding` tables?* * All connect by `playerID`, `yearID`, `stint` 13\.4 Mutating joins -------------------- > The most commonly used join is the left join: you use this whenever you look up additional data from another table, because it preserves the original observations even when there isn’t a match. ### 13\.4\.6 1. *Compute the average delay by destination, then join on the `airports`* *data frame so you can show the spatial distribution of delays. Here’s an* *easy way to draw a map of the United States:* ``` flights %>% semi_join(airports, c("dest" = "faa")) %>% group_by(dest) %>% summarise(delay = mean(arr_delay, na.rm=TRUE)) %>% left_join(airports, by = c("dest"="faa")) %>% ggplot(aes(lon, lat)) + borders("state") + geom_point(aes(colour = delay)) + coord_quickmap()+ # see chapter 28 for information on scales scale_color_gradient2(low = "blue", high = "red") ``` 2. *Add the location of the origin *and* destination (i.e. the `lat` and `lon`)* *to `flights`.* ``` flights %>% left_join(airports, by = c("dest" = "faa")) %>% left_join(airports, by = c("origin" = "faa"), suffix = c("_dest", "_origin")) %>% select(flight, carrier, dest, lat_dest, lon_dest, origin, lat_origin, lon_origin) ``` ``` ## # A tibble: 336,776 x 8 ## flight carrier dest lat_dest lon_dest origin lat_origin lon_origin ## <int> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl> ## 1 1545 UA IAH 30.0 -95.3 EWR 40.7 -74.2 ## 2 1714 UA IAH 30.0 -95.3 LGA 40.8 -73.9 ## 3 1141 AA MIA 25.8 -80.3 JFK 40.6 -73.8 ## 4 725 B6 BQN NA NA JFK 40.6 -73.8 ## 5 461 DL ATL 33.6 -84.4 LGA 40.8 -73.9 ## 6 1696 UA ORD 42.0 -87.9 EWR 40.7 -74.2 ## 7 507 B6 FLL 26.1 -80.2 EWR 40.7 -74.2 ## 8 5708 EV IAD 38.9 -77.5 LGA 40.8 -73.9 ## 9 79 B6 MCO 28.4 -81.3 JFK 40.6 -73.8 ## 10 301 AA ORD 42.0 -87.9 LGA 40.8 -73.9 ## # ... with 336,766 more rows ``` Note that the suffix allows you to tag names onto first and second table, hence why vector is length 2 3. 
*Is there a relationship between the age of a plane and its delays?* ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age < 30) %>% ggplot(aes(x = age, y = avg_delay))+ ggbeeswarm::geom_quasirandom()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` Looks as though planes that are roughly 5 to 10 years old have higher delays… Let’s look at same thing using boxplots. ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age <= 30, age >= 0) %>% ggplot()+ geom_boxplot(aes(x = cut_width(age, 2, boundary = 0), y = avg_delay))+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` Perhaps there is not an overall trend association between age and delays, though it seems that the particular group of planes in that time range seem to have delays than either newer or older planes. On the other hand, there does almost look to be a seasonality pattern – though this may just be me seeing things… perhaps worth exploring more… A simple way to test for a non\-linear relationship would be to discretize age and then pass it through an anova… ``` nycflights13::flights %>% select(arr_delay, tailnum) %>% left_join(planes, by="tailnum") %>% filter(!is.na(arr_delay)) %>% mutate(age = 2013 - year, age_round_5 = (5 * age %/% 5) %>% as.factor()) %>% with(aov(arr_delay ~ age_round_5)) %>% summary() ``` ``` ## Df Sum Sq Mean Sq F value Pr(>F) ## age_round_5 11 1062080 96553 47.92 <2e-16 *** ## Residuals 273841 551756442 2015 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## 53493 observations deleted due to missingness ``` * There are weaknesses to using anova, but the low p\-value above suggests test arrival delay is not randomly distributed across age * The reason for such a difference may be trivial or may be confounded by a more interesting pattern… but these are deeper questions 4. *What weather conditions make it more likely to see a delay?* There are a lot of ways you could have approached this problem. Below, I look at the average weather value for each of the groups `FALSE`, `TRUE` and `Canceled` – `FALSE` corresponding with non\-delayed flights, `TRUE` with delayed flights and `Canceled` with flights that were canceled. 
If I were feeling fancy, I would have also added the standard errors on these… ``` flights_weath <- mutate(flights, delay_TF = dep_delay > 0) %>% separate(sched_dep_time, into = c("hour_sched", "min_sched"), sep = -3, remove = FALSE, convert = TRUE) %>% left_join(weather, by = c("origin", "year","month", "day", "hour_sched"="hour")) flights_weath_gath <- flights_weath %>% select(sched_dep_time, delay_TF, sched_dep_time, temp:visib) %>% mutate(key = row_number()) %>% gather(temp, dewp, humid, wind_dir, wind_speed, wind_gust, precip, pressure, visib, key="weather", value="values") flights_summarized <- flights_weath_gath %>% group_by(weather, delay_TF) %>% summarise(median_weath = median(values, na.rm = TRUE), mean_weath = mean(values, na.rm = TRUE), sum_n = sum(!is.na(values))) %>% ungroup() %>% mutate(delay_TF = ifelse(is.na(delay_TF), "Canceled", delay_TF), delay_TF = forcats::as_factor(delay_TF, c(FALSE, TRUE, "Canceled"))) flights_summarized %>% ggplot(aes(x = delay_TF, y = mean_weath, fill = delay_TF))+ geom_col()+ facet_wrap(~weather, scales = "free_y")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` While precipitation shows the largest difference, my guess is that its standard error would be much greater day to day: the values are very low overall, so a few cases with a lot of rain could tick the mean up, which may make it tough to actually use as a predictor… 5. *What happened on June 13 2013? Display the spatial pattern of delays,* *and then use Google to cross\-reference with the weather.* Looks like the East coast was getting hammered and flights arriving from Atlanta and similar locations were very delayed. Guessing it was either a weather issue, or a problem in Atlanta or with Delta. 13\.5 Filtering joins --------------------- ### 13\.5\.1 1. *What does it mean for a flight to have a missing `tailnum`?* All flights with a missing tailnum in the `flights` table were cancelled, as you can see below. ``` flights %>% count(is.na(tailnum), is.na(arr_delay)) ``` ``` ## # A tibble: 3 x 3 ## `is.na(tailnum)` `is.na(arr_delay)` n ## <lgl> <lgl> <int> ## 1 FALSE FALSE 327346 ## 2 FALSE TRUE 6918 ## 3 TRUE TRUE 2512 ``` *What do the tail numbers that don’t have a matching record in `planes` have in common?* *(Hint: one variable explains \~90% of the problems.)* ``` flights %>% anti_join(planes, by="tailnum") %>% count(carrier, sort = TRUE) ``` ``` ## # A tibble: 10 x 2 ## carrier n ## <chr> <int> ## 1 MQ 25397 ## 2 AA 22558 ## 3 UA 1693 ## 4 9E 1044 ## 5 B6 830 ## 6 US 699 ## 7 FL 187 ## 8 DL 110 ## 9 F9 50 ## 10 WN 38 ``` ``` flights %>% mutate(in_planes = tailnum %in% planes$tailnum) %>% group_by(carrier) %>% summarise(flights_inPlanes = sum(in_planes), n = n(), perc_inPlanes = flights_inPlanes / n) %>% ungroup() ``` ``` ## # A tibble: 16 x 4 ## carrier flights_inPlanes n perc_inPlanes ## <chr> <int> <int> <dbl> ## 1 9E 17416 18460 0.943 ## 2 AA 10171 32729 0.311 ## 3 AS 714 714 1 ## 4 B6 53805 54635 0.985 ## 5 DL 48000 48110 0.998 ## 6 EV 54173 54173 1 ## 7 F9 635 685 0.927 ## 8 FL 3073 3260 0.943 ## 9 HA 342 342 1 ## 10 MQ 1000 26397 0.0379 ## 11 OO 32 32 1 ## 12 UA 56972 58665 0.971 ## 13 US 19837 20536 0.966 ## 14 VX 5162 5162 1 ## 15 WN 12237 12275 0.997 ## 16 YV 601 601 1 ``` Most of the unmatched tail numbers belong to MQ and AA, carriers that have relatively few of their tailnums in the `planes` table. 2.
*Filter flights to only show flights with planes that have flown at least 100 flights.* ``` planes_many <- flights %>% count(tailnum, sort=TRUE) %>% filter(n > 100) semi_join(flights, planes_many) ``` ``` ## Joining, by = "tailnum" ``` ``` ## # A tibble: 229,202 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 544 545 -1 1004 ## 4 2013 1 1 554 558 -4 740 ## 5 2013 1 1 555 600 -5 913 ## 6 2013 1 1 557 600 -3 709 ## 7 2013 1 1 557 600 -3 838 ## 8 2013 1 1 558 600 -2 849 ## 9 2013 1 1 558 600 -2 853 ## 10 2013 1 1 558 600 -2 923 ## # ... with 229,192 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` * `add_count()` is another helpful function that could have been used here 3. *Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.* ``` fueleconomy::vehicles %>% semi_join(fueleconomy::common, by=c("make", "model")) ``` ``` ## # A tibble: 14,531 x 12 ## id make model year class trans drive cyl displ fuel hwy cty ## <int> <chr> <chr> <int> <chr> <chr> <chr> <int> <dbl> <chr> <int> <int> ## 1 1833 Acura Integ~ 1986 Subc~ Auto~ Fron~ 4 1.6 Regu~ 28 22 ## 2 1834 Acura Integ~ 1986 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 3 3037 Acura Integ~ 1987 Subc~ Auto~ Fron~ 4 1.6 Regu~ 28 22 ## 4 3038 Acura Integ~ 1987 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 5 4183 Acura Integ~ 1988 Subc~ Auto~ Fron~ 4 1.6 Regu~ 27 22 ## 6 4184 Acura Integ~ 1988 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 7 5303 Acura Integ~ 1989 Subc~ Auto~ Fron~ 4 1.6 Regu~ 27 22 ## 8 5304 Acura Integ~ 1989 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 9 6442 Acura Integ~ 1990 Subc~ Auto~ Fron~ 4 1.8 Regu~ 24 20 ## 10 6443 Acura Integ~ 1990 Subc~ Manu~ Fron~ 4 1.8 Regu~ 26 21 ## # ... with 14,521 more rows ``` 4. *Find the 48 hours (over the course of the whole year) that have the worst delays. Cross\-reference it with the `weather` data. Can you see any patterns?* First: Create two variables that together capture all 48 hour time windows across the year, at the day window of granularity (e.g. the time of day the flight takes off does not matter in establishing time windows for this example, only the day). Second: Gather these time windows into a single dataframe (note that this will increase the length of your data by \~364/365 \* 100 %) Third: Group by `window_start_date` and calculate average `arr_delay` and related metrics. 
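Before the full pipe, here is a quick toy sketch (made\-up dates, not the flights data) of what `cut.Date()` and the blank\-out\-the\-first\-day trick in step one are doing – just an illustration, with approximate outputs in the comments:

```
# cut.Date() buckets each date into a 2-day bin labelled by the bin's start date
dates <- as.Date("2013-01-01") + 0:5

cut.Date(dates, "2 day")
# roughly: 2013-01-01 2013-01-01 2013-01-03 2013-01-03 2013-01-05 2013-01-05

# Blanking out the earliest date before cutting shifts every bin forward by one
# day, which is what produces the second, overlapping set of 48 hour windows
cut.Date(replace(dates, 1, NA), "2 day")
# roughly: NA 2013-01-02 2013-01-02 2013-01-04 2013-01-04 NA
# (the blanked date, and a date landing on the trailing bin edge, come back NA;
#  such rows are dropped by filter(!is.na(window_start_date)) in the real code)
```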
``` delays_windows <- flights %>% #First mutate(date_flight = lubridate::as_date(time_hour)) %>% mutate(startdate_window1 = cut.Date(date_flight, "2 day")) %>% mutate(date_flight2 = ifelse(!(date_flight == min(date_flight, na.rm = TRUE)), date_flight, NA), date_flight2 = lubridate::as_date(date_flight2), startdate_window2 = cut.Date(date_flight2, "2 day")) %>% select(-date_flight, -date_flight2) %>% #Second gather(startdate_window1, startdate_window2, key = "start_window", value = "window_start_date") %>% filter(!is.na(window_start_date)) %>% #Third group_by(window_start_date) %>% summarise(num = n(), perc_cancelled = sum(is.na(arr_delay)) / n(), mean_delay = mean(arr_delay, na.rm = TRUE), perc_delay = mean(arr_delay > 0, na.rm = TRUE), total_delay_mins = sum(arr_delay, na.rm = TRUE)) %>% ungroup() ``` ``` ## Warning: attributes are not identical across measure variables; ## they will be dropped ``` ``` #don't worry about warning of 'attributes are not identical...', that is #because the cut function assigns attributes to the value, it's fine if #these are dropped here. ``` Create a tibble of the worst 2\-day period for mean `arr_delay` ``` WorstWindow <- delays_windows %>% mutate(mean_delay_rank = dplyr::min_rank(-mean_delay)) %>% filter(mean_delay_rank <= 1) WorstDates <- tibble(dates = c(lubridate::as_date(WorstWindow$window_start_date), lubridate::as_date(WorstWindow$window_start_date) + lubridate::duration(1, "days"))) ``` Amend the weather data so that weather is an average across the three NY locations rather than separate for each[30](#fn30) ``` weather_ammended <- weather %>% mutate(time_hour = lubridate::make_datetime(year, month, day, hour)) %>% select(-one_of("origin", "year", "month", "day", "hour")) %>% group_by(time_hour) %>% summarise_all(mean, na.rm = TRUE) %>% ungroup() ``` Use a filtering join to keep just the weather for the worst 2 days ``` weather_worst <- weather_ammended %>% mutate(dates = as_date(time_hour)) %>% semi_join(WorstDates) ``` ``` ## Joining, by = "dates" ``` Plot of hourly weather values across the worst 48 hour time window. ``` weather_worst %>% select(-dates) %>% gather(temp:visib, key = "weather_type", value = "weather_value") %>% ggplot(aes(x = time_hour, y = weather_value))+ geom_line()+ facet_grid(weather_type ~ ., scales = "free_y")+ labs(title = 'Hourly weather values across worst 48 hours of delays') ``` Patterns: * `wind_gust` and `wind_speed` are the same. * There is a high level of collinearity in the spikes and changes, e.g. an increase in `precip` corresponds with a decrease in `visib` and perhaps an uptick in `wind_speed`. Perhaps we want to view how the average hourly weather values on the worst days compare to those on average weather days. Create a summary of average hourly weather values for the worst 48 hour period and for the average period, then append these and plot. ``` bind_rows( weather_worst %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% mutate(category = "weather on worst 48") %>% gather(temp:visib, key = weather_type, value = weather_val) , weather_ammended %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% mutate(category = "weather on average") %>% gather(temp:visib, key = weather_type, value = weather_val) ) %>% ggplot(aes(x = category, y = weather_val, fill = category))+ geom_col()+ facet_wrap(~weather_type, scales = "free_y")+ labs(title = "Hourly average weather values on worst 48 hour window of delays vs. hourly average weather across year", caption = "Note that delays are based on mean(arr_delay, na.rm = TRUE)") ``` For this to be the worst 48 hour period, the weather doesn’t actually seem to be as extreme as I would have guessed. Let’s add in average `arr_delay` by planned departure time, to see how delays varied throughout the day and whether there was a surge or change in weather that led to the huge change in delays. ``` flights %>% mutate(dates = as_date(time_hour)) %>% semi_join(WorstDates) %>% group_by(time_hour) %>% summarise(value = mean(arr_delay, na.rm = TRUE)) %>% ungroup() %>% mutate(value_type = "Mean_ArrDelay") %>% bind_rows( weather_worst %>% select(-dates) %>% gather(temp:visib, key = "value_type", value = "value") ) %>% mutate(weather_attr = !(value_type == "Mean_ArrDelay"), value_type = forcats::fct_relevel(value_type, "Mean_ArrDelay")) %>% ggplot(aes(x = time_hour, value, colour = weather_attr))+ geom_line()+ facet_grid(value_type ~ ., scales = "free_y")+ labs(title = 'Hourly weather and delay values across worst 48 hours of delays') ``` ``` ## Joining, by = "dates" ``` Maybe that first uptick in precipitation corresponded with the increase in delay… but it still looks extreme enough that an incident may have caused it. I checked the news and it looks like a plane crash\-landed on the tarmac at one of the airports on this day [https://en.wikipedia.org/wiki/Southwest\_Airlines\_Flight\_345\#cite\_note\-DMN\_Aircraft\_Totaled\_20160808\-4](https://en.wikipedia.org/wiki/Southwest_Airlines_Flight_345#cite_note-DMN_Aircraft_Totaled_20160808-4). The incident occurred at 17:45 on Jul 22, which overlaps with the time we see the uptick in delays. I show plots and models of 48 hour time windows in a variety of other contexts and in more detail in the [Appendix](28-graphics-for-communication.html#appendix-13) 5. *What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you? What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?* * `anti_join(flights, airports, by = c("dest" = "faa"))` – tells me the flight dests that are missing an airport in `airports` * `anti_join(airports, flights, by = c("faa" = "dest"))` – tells me the airports with no flights coming to them 6. *You might expect that there’s an implicit relationship between plane and airline, because each plane is flown by a single airline. Confirm or reject this hypothesis using the tools you’ve learned above.* ``` tail_carr <- flights %>% filter(!is.na(tailnum)) %>% distinct(carrier, tailnum) %>% count(tailnum, sort=TRUE) tail_carr %>% filter(n > 1) ``` ``` ## # A tibble: 17 x 2 ## tailnum n ## <chr> <int> ## 1 N146PQ 2 ## 2 N153PQ 2 ## 3 N176PQ 2 ## 4 N181PQ 2 ## 5 N197PQ 2 ## 6 N200PQ 2 ## 7 N228PQ 2 ## 8 N232PQ 2 ## 9 N933AT 2 ## 10 N935AT 2 ## 11 N977AT 2 ## 12 N978AT 2 ## 13 N979AT 2 ## 14 N981AT 2 ## 15 N989AT 2 ## 16 N990AT 2 ## 17 N994AT 2 ``` You should reject that hypothesis: you can see that 17 `tailnum`s appear under multiple carriers. Below is code to show those 17 tailnums. ``` flights %>% distinct(carrier, tailnum) %>% filter(!is.na(tailnum)) %>% group_by(tailnum) %>% mutate(n_tail = n()) %>% ungroup() %>% filter(n_tail > 1) %>% arrange(desc(n_tail), tailnum) ``` ``` ## # A tibble: 34 x 3 ## carrier tailnum n_tail ## <chr> <chr> <int> ## 1 9E N146PQ 2 ## 2 EV N146PQ 2 ## 3 9E N153PQ 2 ## 4 EV N153PQ 2 ## 5 9E N176PQ 2 ## 6 EV N176PQ 2 ## 7 9E N181PQ 2 ## 8 EV N181PQ 2 ## 9 9E N197PQ 2 ## 10 EV N197PQ 2 ## # ...
with 24 more rows ``` Appendix -------- ### 13\.5\.1\.4 Graph all of these metrics at once using roughly the same method as used on 13\.4\.6 \#4\. ``` delays_windows %>% gather(perc_cancelled, mean_delay, perc_delay, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date)) %>% ggplot(aes(window_start_date, val))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ scale_x_date(date_labels = "%b %d")+ labs(title = 'Measures of delay across 48 hour time windows') ``` Create 48 hour windows for weather data. Follow exact same steps as above. ``` weather_windows <- weather_ammended %>% mutate(date_flight = lubridate::as_date(time_hour)) %>% mutate(startdate_window1 = cut.Date(date_flight, "2 day")) %>% mutate(date_flight2 = ifelse(!(date_flight == min(date_flight, na.rm = TRUE)), date_flight, NA), date_flight2 = lubridate::as_date(date_flight2), startdate_window2 = cut.Date(date_flight2, "2 day")) %>% select(-date_flight, -date_flight2) %>% #Second gather(startdate_window1, startdate_window2, key = "start_window", value = "window_start_date") %>% filter(!is.na(window_start_date)) %>% #Third group_by(window_start_date) %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% ungroup() %>% select(-wind_gust) ``` ``` ## Warning: attributes are not identical across measure variables; ## they will be dropped ``` Graph using same method as above… ``` weather_windows %>% gather(temp:visib, key = weather_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date)) %>% ggplot(aes(x = window_start_date, y = val))+ geom_line()+ facet_wrap(~weather_type, ncol = 1, scales = "free_y")+ scale_x_date(date_labels = "%b %d")+ labs(title = 'Measures of weather across 48 hour time windows') ``` Connect delays and weather data ``` weather_delay_joined <- left_join(delays_windows, weather_windows, by = "window_start_date") %>% select(mean_delay, temp:visib, window_start_date) %>% select(-dewp) %>% #is almost completely correlated with temp so removed one of them... na.omit() ``` Plot of 48 hour window of weather scores against mean delay keeping intact order of observations ``` weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date), value_type = forcats::fct_relevel(value_type, "mean_delay")) %>% ggplot(aes(x = window_start_date, y = val, colour = ! value_type == "mean_delay"))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ labs(colour = "Weather value", title = "Mean delay and weather value in 2-day rolling window") ``` Plot of mean\_delay against weather type, each point representing a different ‘window’ ``` weather_delay_joined %>% gather(temp:visib, key = weather_type, value = weather_val) %>% ggplot(aes(x = weather_val, y = mean_delay))+ geom_point()+ geom_smooth()+ facet_wrap(~weather_type, scales = "free_x") ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` In a sense, these plots are not really valid as they obscure the fact that each point is not an independent observation (because there is a high level of association with w/e the value was on a single day with what it was in the previous day). E.g. 
mean\_delay has a correlation of \~ 0\.68 with the prior day’s value, as shown below… This is often ignored, and we can also ignore it for now as it gets into time series and things we don’t need to worry about yet… but it is something to be aware of… ``` weather_delay_joined %>% mutate(mean_delay_lag = lag(mean_delay)) %>% select(mean_delay, mean_delay_lag) %>% na.omit() %>% cor() ``` ``` ## mean_delay mean_delay_lag ## mean_delay 1.0000000 0.6795631 ## mean_delay_lag 0.6795631 1.0000000 ``` The data is not independent (as mentioned above) and there are many problems associated with this… but let’s ignore it for now and just look at a few statistics… You can see below that the raw correlation of `mean_delay` is highest with `humid`. ``` weather_delay_joined %>% select(-window_start_date) %>% cor() ``` ``` ## mean_delay temp humid wind_dir wind_speed ## mean_delay 1.00000000 0.08515338 0.4549140 -0.05371522 0.16262585 ## temp 0.08515338 1.00000000 0.3036520 -0.25906906 -0.40160692 ## humid 0.45491403 0.30365205 1.0000000 -0.51010505 -0.30383181 ## wind_dir -0.05371522 -0.25906906 -0.5101050 1.00000000 0.50039832 ## wind_speed 0.16262585 -0.40160692 -0.3038318 0.50039832 1.00000000 ## precip 0.36475598 0.02775525 0.4481898 -0.12853817 0.11176053 ## pressure -0.31716918 -0.23873857 -0.2363718 -0.26627495 -0.25716938 ## visib -0.38740156 0.12290097 -0.6647598 0.26307685 0.05275072 ## precip pressure visib ## mean_delay 0.36475598 -0.3171692 -0.38740156 ## temp 0.02775525 -0.2387386 0.12290097 ## humid 0.44818978 -0.2363718 -0.66475984 ## wind_dir -0.12853817 -0.2662749 0.26307685 ## wind_speed 0.11176053 -0.2571694 0.05275072 ## precip 1.00000000 -0.2265636 -0.44400337 ## pressure -0.22656357 1.0000000 0.12032520 ## visib -0.44400337 0.1203252 1.00000000 ``` When accounting for the other variables, the relationship with `wind_speed` also seems to emerge as important… ``` weather_delay_joined %>% select(-window_start_date) %>% lm(mean_delay ~ ., data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = mean_delay ~ ., data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -26.179 -7.581 -1.374 5.271 38.008 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 169.56872 132.07737 1.284 0.2000 ## temp 0.05460 0.04702 1.161 0.2464 ## humid 0.48158 0.09088 5.299 2.04e-07 *** ## wind_dir 0.01420 0.01376 1.032 0.3026 ## wind_speed 1.15641 0.25561 4.524 8.28e-06 *** ## precip 140.84141 78.84192 1.786 0.0749 . ## pressure -0.19722 0.12476 -1.581 0.1148 ## visib -1.15009 0.80567 -1.427 0.1543 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 11.64 on 356 degrees of freedom ## Multiple R-squared: 0.3332, Adjusted R-squared: 0.3201 ## F-statistic: 25.42 on 7 and 356 DF, p-value: < 2.2e-16 ``` There are a variety of reasons[31](#fn31) you may want to evaluate how the change in an attribute relates to the change in another attribute. In the cases below I plot the diffs, for example: *(average value on 2013\-02\-07 to 2013\-02\-08\) \- (average value on 2013\-02\-08 to 2013\-02\-09\)*. Note that the time windows are not distinct but overlap by 24 hours.
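To make that arithmetic concrete before the real thing, a two\-line sketch with made\-up numbers (hypothetical window averages, not the actual data):

```
delay_toy <- c(10, 14, 9, 12)        # hypothetical 2-day window averages
delay_toy - dplyr::lag(delay_toy)    # NA  4 -5  3
```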
If doing a thorough account of time\-series you would do a lot more than I show below… ``` weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date), value_type = forcats::fct_relevel(value_type, "mean_delay")) %>% group_by(value_type) %>% mutate(value_diff = val - lag(val)) %>% ggplot(aes(x = window_start_date, y = value_diff, colour = !value_type == "mean_delay"))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ labs(colour = "Weather value", title = "Plot of diffs in value") ``` ``` ## Warning: Removed 2 rows containing missing values (geom_path). ``` Let’s plot these diffs as a scatter plot now (no longer looking at the order in which the observations emerged) ``` weather_delay_joined %>% gather(temp:visib, key = weather_type, value = val) %>% group_by(weather_type) %>% mutate(weather_diff = val - lag(val), delay_diff = mean_delay - lag(mean_delay)) %>% ungroup() %>% ggplot(aes(x = weather_diff, y = delay_diff))+ geom_point()+ geom_smooth()+ facet_wrap(~weather_type, scales = "free_x")+ labs(title = "scatter plot of diffs in value") ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 7 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 7 rows containing missing values (geom_point). ``` Let’s look at the correlatioin and regression against these diffs ``` diff_data <- weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% group_by(value_type) %>% mutate(diff = val - lag(val)) %>% ungroup() %>% select(-val) %>% spread(key = value_type, value = diff) diff_data %>% select(-window_start_date) %>% na.omit() %>% cor() ``` ``` ## humid mean_delay precip pressure temp ## humid 1.0000000 0.54331654 0.48014091 -0.3427556 0.318534448 ## mean_delay 0.5433165 1.00000000 0.51510649 -0.3247584 0.150601446 ## precip 0.4801409 0.51510649 1.00000000 -0.3014413 0.074916969 ## pressure -0.3427556 -0.32475840 -0.30144131 1.0000000 -0.488629288 ## temp 0.3185344 0.15060145 0.07491697 -0.4886293 1.000000000 ## visib -0.7393902 -0.53844191 -0.49795469 0.2721685 -0.206815887 ## wind_dir -0.4978895 -0.20689204 -0.20823801 -0.2443716 -0.003608694 ## wind_speed -0.1964910 0.05738881 0.15742776 -0.3687487 -0.085437521 ## visib wind_dir wind_speed ## humid -0.73939024 -0.497889528 -0.19649100 ## mean_delay -0.53844191 -0.206892045 0.05738881 ## precip -0.49795469 -0.208238012 0.15742776 ## pressure 0.27216848 -0.244371617 -0.36874869 ## temp -0.20681589 -0.003608694 -0.08543752 ## visib 1.00000000 0.378625695 0.06152223 ## wind_dir 0.37862569 1.000000000 0.43970745 ## wind_speed 0.06152223 0.439707451 1.00000000 ``` ``` diff_data %>% select(-window_start_date) %>% lm(mean_delay ~ ., data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = mean_delay ~ ., data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -32.843 -4.394 -0.189 3.749 27.177 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.022454 0.460301 -0.049 0.961121 ## humid 0.281416 0.082305 3.419 0.000701 *** ## precip 324.087906 63.453719 5.107 5.34e-07 *** ## pressure -0.275033 0.149084 -1.845 0.065895 . ## temp -0.127570 0.143134 -0.891 0.373394 ## visib -2.420046 0.728749 -3.321 0.000991 *** ## wind_dir 0.002373 0.012316 0.193 0.847329 ## wind_speed 0.128749 0.226138 0.569 0.569487 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 8.77 on 355 degrees of freedom ## (1 observation deleted due to missingness) ## Multiple R-squared: 0.4111, Adjusted R-squared: 0.3995 ## F-statistic: 35.4 on 7 and 355 DF, p-value: < 2.2e-16 ```
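One quick extension (not done above) would be to check how much serial dependence is left after differencing, e.g. by looking at the autocorrelation of the residuals from the diff model – a minimal sketch reusing `diff_data` from above and base R’s `acf()`:

```
library(dplyr)

# Refit the same diff model as above, then inspect the residual autocorrelation.
# If the lag-1 autocorrelation is near zero, differencing has soaked up most of
# the day-to-day dependence; if not, a proper time-series treatment is needed.
diff_fit <- diff_data %>%
  select(-window_start_date) %>%
  lm(mean_delay ~ ., data = .)

acf(residuals(diff_fit), lag.max = 10)
```

This is only a rough check, subject to the same caveats as before (overlapping windows that share a day), not a substitute for a real time\-series analysis…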
## $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN... ## $ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR"... ## $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL"... ## $ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138... ## $ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 94... ## $ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5,... ## $ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, ... ## $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013... ## $ surrogate_key <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... ``` 2. *Identify the keys in the following datasets* 1. `Lahman::Batting`: player, year, stint ``` Lahman::Batting %>% count(playerID, yearID, stint) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: playerID <chr>, yearID <int>, stint <int>, n <int> ``` 2. `babynames::babynames`: name, sex, year ``` babynames::babynames %>% count(name, sex, year) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: name <chr>, sex <chr>, year <dbl>, n <int> ``` 3. `nasaweather::atmos`: lat, long, year, month ``` nasaweather::atmos %>% count(lat, long, year, month) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 5 ## # ... with 5 variables: lat <dbl>, long <dbl>, year <int>, month <int>, ## # n <int> ``` 4. `fueleconomy::vehicles`: id ``` fueleconomy::vehicles %>% count(id) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 2 ## # ... with 2 variables: id <int>, n <int> ``` 5. `ggplot2::diamonds`: needs surrogate ``` diamonds %>% count(x, y, z, depth, table, carat, cut, color, price, clarity ) %>% filter(n > 1) ``` ``` ## # A tibble: 143 x 11 ## x y z depth table carat cut color price clarity n ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <ord> <ord> <int> <ord> <int> ## 1 0 0 0 64.1 60 0.71 Good F 2130 SI2 2 ## 2 4.23 4.26 2.69 63.4 57 0.3 Good J 394 VS1 2 ## 3 4.26 4.23 2.69 63.4 57 0.3 Very Good J 506 VS1 2 ## 4 4.26 4.29 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 5 4.27 4.28 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 6 4.29 4.31 2.71 63 55 0.3 Very Good G 526 VS2 2 ## 7 4.29 4.31 2.73 63.5 56 0.31 Good D 571 SI1 2 ## 8 4.31 4.28 2.67 62.2 58 0.3 Premium D 709 SI1 2 ## 9 4.31 4.29 2.71 63 55 0.3 Ideal G 675 VS2 2 ## 10 4.31 4.29 2.73 63.5 56 0.31 Very Good D 732 SI1 2 ## # ... with 133 more rows ``` ``` diamonds %>% mutate(surrogate_id = row_number()) ``` ``` ## # A tibble: 53,940 x 11 ## carat cut color clarity depth table price x y z ## <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl> ## 1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43 ## 2 0.21 Prem~ E SI1 59.8 61 326 3.89 3.84 2.31 ## 3 0.23 Good E VS1 56.9 65 327 4.05 4.07 2.31 ## 4 0.290 Prem~ I VS2 62.4 58 334 4.2 4.23 2.63 ## 5 0.31 Good J SI2 63.3 58 335 4.34 4.35 2.75 ## 6 0.24 Very~ J VVS2 62.8 57 336 3.94 3.96 2.48 ## 7 0.24 Very~ I VVS1 62.3 57 336 3.95 3.98 2.47 ## 8 0.26 Very~ H SI1 61.9 55 337 4.07 4.11 2.53 ## 9 0.22 Fair E VS2 65.1 61 337 3.87 3.78 2.49 ## 10 0.23 Very~ H VS1 59.4 61 338 4 4.05 2.39 ## # ... with 53,930 more rows, and 1 more variable: surrogate_id <int> ``` 3. *Draw a diagram illustrating the connections between the `Batting`,`Master`, and `Salaries` tables in the Lahman package. 
Draw another diagram that shows the relationship between `Master`, `Managers`, `AwardsManagers`.* * `Lahman::Batting` and `Lahman::Master` combine by `playerID` * `Lahman::Batting` and `Lahman::Salaries` combine by `playerID`, `yearID` * `Lahman::Master` and `Lahman::Salaries` combine by `playerID` * `Lahman::Master` and `Lahman::Managers` combine by `playerID` * `Lahman::Master` and `Lahman::AwardsManagers` combine by `playerID`*How would you characterise the relationship between the `Batting`, `Pitching`, and `Fielding` tables?* * All connect by `playerID`, `yearID`, `stint` ### 13\.3\.1 1. *Add a surrogate key to `flights`.* ``` flights %>% mutate(surrogate_key = row_number()) %>% glimpse() ``` ``` ## Observations: 336,776 ## Variables: 20 ## $ year <int> 2013, 2013, 2013, 2013, 2013, 2013, 2013, 2013,... ## $ month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ day <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,... ## $ dep_time <int> 517, 533, 542, 544, 554, 554, 555, 557, 557, 55... ## $ sched_dep_time <int> 515, 529, 540, 545, 600, 558, 600, 600, 600, 60... ## $ dep_delay <dbl> 2, 4, 2, -1, -6, -4, -5, -3, -3, -2, -2, -2, -2... ## $ arr_time <int> 830, 850, 923, 1004, 812, 740, 913, 709, 838, 7... ## $ sched_arr_time <int> 819, 830, 850, 1022, 837, 728, 854, 723, 846, 7... ## $ arr_delay <dbl> 11, 20, 33, -18, -25, 12, 19, -14, -8, 8, -2, -... ## $ carrier <chr> "UA", "UA", "AA", "B6", "DL", "UA", "B6", "EV",... ## $ flight <int> 1545, 1714, 1141, 725, 461, 1696, 507, 5708, 79... ## $ tailnum <chr> "N14228", "N24211", "N619AA", "N804JB", "N668DN... ## $ origin <chr> "EWR", "LGA", "JFK", "JFK", "LGA", "EWR", "EWR"... ## $ dest <chr> "IAH", "IAH", "MIA", "BQN", "ATL", "ORD", "FLL"... ## $ air_time <dbl> 227, 227, 160, 183, 116, 150, 158, 53, 140, 138... ## $ distance <dbl> 1400, 1416, 1089, 1576, 762, 719, 1065, 229, 94... ## $ hour <dbl> 5, 5, 5, 5, 6, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5,... ## $ minute <dbl> 15, 29, 40, 45, 0, 58, 0, 0, 0, 0, 0, 0, 0, 0, ... ## $ time_hour <dttm> 2013-01-01 05:00:00, 2013-01-01 05:00:00, 2013... ## $ surrogate_key <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... ``` 2. *Identify the keys in the following datasets* 1. `Lahman::Batting`: player, year, stint ``` Lahman::Batting %>% count(playerID, yearID, stint) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: playerID <chr>, yearID <int>, stint <int>, n <int> ``` 2. `babynames::babynames`: name, sex, year ``` babynames::babynames %>% count(name, sex, year) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 4 ## # ... with 4 variables: name <chr>, sex <chr>, year <dbl>, n <int> ``` 3. `nasaweather::atmos`: lat, long, year, month ``` nasaweather::atmos %>% count(lat, long, year, month) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 5 ## # ... with 5 variables: lat <dbl>, long <dbl>, year <int>, month <int>, ## # n <int> ``` 4. `fueleconomy::vehicles`: id ``` fueleconomy::vehicles %>% count(id) %>% filter(n > 1) ``` ``` ## # A tibble: 0 x 2 ## # ... with 2 variables: id <int>, n <int> ``` 5. 
`ggplot2::diamonds`: needs surrogate ``` diamonds %>% count(x, y, z, depth, table, carat, cut, color, price, clarity ) %>% filter(n > 1) ``` ``` ## # A tibble: 143 x 11 ## x y z depth table carat cut color price clarity n ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <ord> <ord> <int> <ord> <int> ## 1 0 0 0 64.1 60 0.71 Good F 2130 SI2 2 ## 2 4.23 4.26 2.69 63.4 57 0.3 Good J 394 VS1 2 ## 3 4.26 4.23 2.69 63.4 57 0.3 Very Good J 506 VS1 2 ## 4 4.26 4.29 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 5 4.27 4.28 2.66 62.2 57 0.3 Ideal H 450 SI1 2 ## 6 4.29 4.31 2.71 63 55 0.3 Very Good G 526 VS2 2 ## 7 4.29 4.31 2.73 63.5 56 0.31 Good D 571 SI1 2 ## 8 4.31 4.28 2.67 62.2 58 0.3 Premium D 709 SI1 2 ## 9 4.31 4.29 2.71 63 55 0.3 Ideal G 675 VS2 2 ## 10 4.31 4.29 2.73 63.5 56 0.31 Very Good D 732 SI1 2 ## # ... with 133 more rows ``` ``` diamonds %>% mutate(surrogate_id = row_number()) ``` ``` ## # A tibble: 53,940 x 11 ## carat cut color clarity depth table price x y z ## <dbl> <ord> <ord> <ord> <dbl> <dbl> <int> <dbl> <dbl> <dbl> ## 1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43 ## 2 0.21 Prem~ E SI1 59.8 61 326 3.89 3.84 2.31 ## 3 0.23 Good E VS1 56.9 65 327 4.05 4.07 2.31 ## 4 0.290 Prem~ I VS2 62.4 58 334 4.2 4.23 2.63 ## 5 0.31 Good J SI2 63.3 58 335 4.34 4.35 2.75 ## 6 0.24 Very~ J VVS2 62.8 57 336 3.94 3.96 2.48 ## 7 0.24 Very~ I VVS1 62.3 57 336 3.95 3.98 2.47 ## 8 0.26 Very~ H SI1 61.9 55 337 4.07 4.11 2.53 ## 9 0.22 Fair E VS2 65.1 61 337 3.87 3.78 2.49 ## 10 0.23 Very~ H VS1 59.4 61 338 4 4.05 2.39 ## # ... with 53,930 more rows, and 1 more variable: surrogate_id <int> ``` 3. *Draw a diagram illustrating the connections between the `Batting`,`Master`, and `Salaries` tables in the Lahman package. Draw another diagram that shows the relationship between `Master`, `Managers`, `AwardsManagers`.* * `Lahman::Batting` and `Lahman::Master` combine by `playerID` * `Lahman::Batting` and `Lahman::Salaries` combine by `playerID`, `yearID` * `Lahman::Master` and `Lahman::Salaries` combine by `playerID` * `Lahman::Master` and `Lahman::Managers` combine by `playerID` * `Lahman::Master` and `Lahman::AwardsManagers` combine by `playerID`*How would you characterise the relationship between the `Batting`, `Pitching`, and `Fielding` tables?* * All connect by `playerID`, `yearID`, `stint` 13\.4 Mutating joins -------------------- > The most commonly used join is the left join: you use this whenever you look up additional data from another table, because it preserves the original observations even when there isn’t a match. ### 13\.4\.6 1. *Compute the average delay by destination, then join on the `airports`* *data frame so you can show the spatial distribution of delays. Here’s an* *easy way to draw a map of the United States:* ``` flights %>% semi_join(airports, c("dest" = "faa")) %>% group_by(dest) %>% summarise(delay = mean(arr_delay, na.rm=TRUE)) %>% left_join(airports, by = c("dest"="faa")) %>% ggplot(aes(lon, lat)) + borders("state") + geom_point(aes(colour = delay)) + coord_quickmap()+ # see chapter 28 for information on scales scale_color_gradient2(low = "blue", high = "red") ``` 2. *Add the location of the origin *and* destination (i.e. 
the `lat` and `lon`)* *to `flights`.* ``` flights %>% left_join(airports, by = c("dest" = "faa")) %>% left_join(airports, by = c("origin" = "faa"), suffix = c("_dest", "_origin")) %>% select(flight, carrier, dest, lat_dest, lon_dest, origin, lat_origin, lon_origin) ``` ``` ## # A tibble: 336,776 x 8 ## flight carrier dest lat_dest lon_dest origin lat_origin lon_origin ## <int> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl> ## 1 1545 UA IAH 30.0 -95.3 EWR 40.7 -74.2 ## 2 1714 UA IAH 30.0 -95.3 LGA 40.8 -73.9 ## 3 1141 AA MIA 25.8 -80.3 JFK 40.6 -73.8 ## 4 725 B6 BQN NA NA JFK 40.6 -73.8 ## 5 461 DL ATL 33.6 -84.4 LGA 40.8 -73.9 ## 6 1696 UA ORD 42.0 -87.9 EWR 40.7 -74.2 ## 7 507 B6 FLL 26.1 -80.2 EWR 40.7 -74.2 ## 8 5708 EV IAD 38.9 -77.5 LGA 40.8 -73.9 ## 9 79 B6 MCO 28.4 -81.3 JFK 40.6 -73.8 ## 10 301 AA ORD 42.0 -87.9 LGA 40.8 -73.9 ## # ... with 336,766 more rows ``` Note that the suffix allows you to tag names onto first and second table, hence why vector is length 2 3. *Is there a relationship between the age of a plane and its delays?* ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age < 30) %>% ggplot(aes(x = age, y = avg_delay))+ ggbeeswarm::geom_quasirandom()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` Looks as though planes that are roughly 5 to 10 years old have higher delays… Let’s look at same thing using boxplots. ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age <= 30, age >= 0) %>% ggplot()+ geom_boxplot(aes(x = cut_width(age, 2, boundary = 0), y = avg_delay))+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` Perhaps there is not an overall trend association between age and delays, though it seems that the particular group of planes in that time range seem to have delays than either newer or older planes. On the other hand, there does almost look to be a seasonality pattern – though this may just be me seeing things… perhaps worth exploring more… A simple way to test for a non\-linear relationship would be to discretize age and then pass it through an anova… ``` nycflights13::flights %>% select(arr_delay, tailnum) %>% left_join(planes, by="tailnum") %>% filter(!is.na(arr_delay)) %>% mutate(age = 2013 - year, age_round_5 = (5 * age %/% 5) %>% as.factor()) %>% with(aov(arr_delay ~ age_round_5)) %>% summary() ``` ``` ## Df Sum Sq Mean Sq F value Pr(>F) ## age_round_5 11 1062080 96553 47.92 <2e-16 *** ## Residuals 273841 551756442 2015 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## 53493 observations deleted due to missingness ``` * There are weaknesses to using anova, but the low p\-value above suggests test arrival delay is not randomly distributed across age * The reason for such a difference may be trivial or may be confounded by a more interesting pattern… but these are deeper questions 4. *What weather conditions make it more likely to see a delay?* There are a lot of ways you could have approached this problem. Below, I look at the average weather value for each of the groups `FALSE`, `TRUE` and `Canceled` – `FALSE` corresponding with non\-delayed flights, `TRUE` with delayed flights and `Canceled` with flights that were canceled. 
If I were feeling fancy, I would have also added the standard errors on these… ``` flights_weath <- mutate(flights, delay_TF = dep_delay > 0) %>% separate(sched_dep_time, into = c("hour_sched", "min_sched"), sep = -3, remove = FALSE, convert = TRUE) %>% left_join(weather, by = c("origin", "year","month", "day", "hour_sched"="hour")) flights_weath_gath <- flights_weath %>% select(sched_dep_time, delay_TF, sched_dep_time, temp:visib) %>% mutate(key = row_number()) %>% gather(temp, dewp, humid, wind_dir, wind_speed, wind_gust, precip, pressure, visib, key="weather", value="values") flights_summarized <- flights_weath_gath %>% group_by(weather, delay_TF) %>% summarise(median_weath = median(values, na.rm = TRUE), mean_weath = mean(values, na.rm = TRUE), sum_n = sum(!is.na(values))) %>% ungroup() %>% mutate(delay_TF = ifelse(is.na(delay_TF), "Canceled", delay_TF), delay_TF = forcats::as_factor(delay_TF, c(FALSE, TRUE, "Canceled"))) flights_summarized %>% ggplot(aes(x = delay_TF, y = mean_weath, fill = delay_TF))+ geom_col()+ facet_wrap(~weather, scales = "free_y")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` While precipitation is the largest difference, my guess is that the standard error on this would be much greater day to day because as you can see the values are very low, so it could be that a few cases with a lot of rain may tick it up, but it may be tough to actually use this as a predictor… 5. *What happened on June 13 2013? Display the spatial pattern of delays,* *and then use Google to cross\-reference with the weather.* Looks like East coast is getting hammered and flights arriving from Atlanta an similar locations were very delayed. Guessing either weather issue, or problem in Atl or delta. ### 13\.4\.6 1. *Compute the average delay by destination, then join on the `airports`* *data frame so you can show the spatial distribution of delays. Here’s an* *easy way to draw a map of the United States:* ``` flights %>% semi_join(airports, c("dest" = "faa")) %>% group_by(dest) %>% summarise(delay = mean(arr_delay, na.rm=TRUE)) %>% left_join(airports, by = c("dest"="faa")) %>% ggplot(aes(lon, lat)) + borders("state") + geom_point(aes(colour = delay)) + coord_quickmap()+ # see chapter 28 for information on scales scale_color_gradient2(low = "blue", high = "red") ``` 2. *Add the location of the origin *and* destination (i.e. the `lat` and `lon`)* *to `flights`.* ``` flights %>% left_join(airports, by = c("dest" = "faa")) %>% left_join(airports, by = c("origin" = "faa"), suffix = c("_dest", "_origin")) %>% select(flight, carrier, dest, lat_dest, lon_dest, origin, lat_origin, lon_origin) ``` ``` ## # A tibble: 336,776 x 8 ## flight carrier dest lat_dest lon_dest origin lat_origin lon_origin ## <int> <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl> ## 1 1545 UA IAH 30.0 -95.3 EWR 40.7 -74.2 ## 2 1714 UA IAH 30.0 -95.3 LGA 40.8 -73.9 ## 3 1141 AA MIA 25.8 -80.3 JFK 40.6 -73.8 ## 4 725 B6 BQN NA NA JFK 40.6 -73.8 ## 5 461 DL ATL 33.6 -84.4 LGA 40.8 -73.9 ## 6 1696 UA ORD 42.0 -87.9 EWR 40.7 -74.2 ## 7 507 B6 FLL 26.1 -80.2 EWR 40.7 -74.2 ## 8 5708 EV IAD 38.9 -77.5 LGA 40.8 -73.9 ## 9 79 B6 MCO 28.4 -81.3 JFK 40.6 -73.8 ## 10 301 AA ORD 42.0 -87.9 LGA 40.8 -73.9 ## # ... with 336,766 more rows ``` Note that the suffix allows you to tag names onto first and second table, hence why vector is length 2 3. 
*Is there a relationship between the age of a plane and its delays?* ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age < 30) %>% ggplot(aes(x = age, y = avg_delay))+ ggbeeswarm::geom_quasirandom()+ geom_smooth() ``` ``` ## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")' ``` Looks as though planes that are roughly 5 to 10 years old have higher delays… Let’s look at same thing using boxplots. ``` group_by(flights, tailnum) %>% summarise(avg_delay = mean(arr_delay, na.rm=TRUE), n = n()) %>% left_join(planes, by="tailnum") %>% mutate(age = 2013 - year) %>% filter(n > 50, age <= 30, age >= 0) %>% ggplot()+ geom_boxplot(aes(x = cut_width(age, 2, boundary = 0), y = avg_delay))+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` Perhaps there is not an overall trend association between age and delays, though it seems that the particular group of planes in that time range seem to have delays than either newer or older planes. On the other hand, there does almost look to be a seasonality pattern – though this may just be me seeing things… perhaps worth exploring more… A simple way to test for a non\-linear relationship would be to discretize age and then pass it through an anova… ``` nycflights13::flights %>% select(arr_delay, tailnum) %>% left_join(planes, by="tailnum") %>% filter(!is.na(arr_delay)) %>% mutate(age = 2013 - year, age_round_5 = (5 * age %/% 5) %>% as.factor()) %>% with(aov(arr_delay ~ age_round_5)) %>% summary() ``` ``` ## Df Sum Sq Mean Sq F value Pr(>F) ## age_round_5 11 1062080 96553 47.92 <2e-16 *** ## Residuals 273841 551756442 2015 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## 53493 observations deleted due to missingness ``` * There are weaknesses to using anova, but the low p\-value above suggests test arrival delay is not randomly distributed across age * The reason for such a difference may be trivial or may be confounded by a more interesting pattern… but these are deeper questions 4. *What weather conditions make it more likely to see a delay?* There are a lot of ways you could have approached this problem. Below, I look at the average weather value for each of the groups `FALSE`, `TRUE` and `Canceled` – `FALSE` corresponding with non\-delayed flights, `TRUE` with delayed flights and `Canceled` with flights that were canceled. 
If I were feeling fancy, I would have also added the standard errors on these… ``` flights_weath <- mutate(flights, delay_TF = dep_delay > 0) %>% separate(sched_dep_time, into = c("hour_sched", "min_sched"), sep = -3, remove = FALSE, convert = TRUE) %>% left_join(weather, by = c("origin", "year","month", "day", "hour_sched"="hour")) flights_weath_gath <- flights_weath %>% select(sched_dep_time, delay_TF, sched_dep_time, temp:visib) %>% mutate(key = row_number()) %>% gather(temp, dewp, humid, wind_dir, wind_speed, wind_gust, precip, pressure, visib, key="weather", value="values") flights_summarized <- flights_weath_gath %>% group_by(weather, delay_TF) %>% summarise(median_weath = median(values, na.rm = TRUE), mean_weath = mean(values, na.rm = TRUE), sum_n = sum(!is.na(values))) %>% ungroup() %>% mutate(delay_TF = ifelse(is.na(delay_TF), "Canceled", delay_TF), delay_TF = forcats::as_factor(delay_TF, c(FALSE, TRUE, "Canceled"))) flights_summarized %>% ggplot(aes(x = delay_TF, y = mean_weath, fill = delay_TF))+ geom_col()+ facet_wrap(~weather, scales = "free_y")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` While precipitation is the largest difference, my guess is that the standard error on this would be much greater day to day because as you can see the values are very low, so it could be that a few cases with a lot of rain may tick it up, but it may be tough to actually use this as a predictor… 5. *What happened on June 13 2013? Display the spatial pattern of delays,* *and then use Google to cross\-reference with the weather.* Looks like East coast is getting hammered and flights arriving from Atlanta an similar locations were very delayed. Guessing either weather issue, or problem in Atl or delta. 13\.5 Filtering joins --------------------- ### 13\.5\.1 1. *What does it mean for a flight to have a missing `tailnum`?* All flights with a missing tailnum in the `flights` table were cancelled as you can see below. ``` flights %>% count(is.na(tailnum), is.na(arr_delay)) ``` ``` ## # A tibble: 3 x 3 ## `is.na(tailnum)` `is.na(arr_delay)` n ## <lgl> <lgl> <int> ## 1 FALSE FALSE 327346 ## 2 FALSE TRUE 6918 ## 3 TRUE TRUE 2512 ``` *What do the tail numbers that don’t have a matching record in `planes` have in common?* *(Hint: one variable explains \~90% of the problems.)* ``` flights %>% anti_join(planes, by="tailnum") %>% count(carrier, sort = TRUE) ``` ``` ## # A tibble: 10 x 2 ## carrier n ## <chr> <int> ## 1 MQ 25397 ## 2 AA 22558 ## 3 UA 1693 ## 4 9E 1044 ## 5 B6 830 ## 6 US 699 ## 7 FL 187 ## 8 DL 110 ## 9 F9 50 ## 10 WN 38 ``` ``` flights %>% mutate(in_planes = tailnum %in% planes$tailnum) %>% group_by(carrier) %>% summarise(flights_inPlanes = sum(in_planes), n = n(), perc_inPlanes = flights_inPlanes / n) %>% ungroup() ``` ``` ## # A tibble: 16 x 4 ## carrier flights_inPlanes n perc_inPlanes ## <chr> <int> <int> <dbl> ## 1 9E 17416 18460 0.943 ## 2 AA 10171 32729 0.311 ## 3 AS 714 714 1 ## 4 B6 53805 54635 0.985 ## 5 DL 48000 48110 0.998 ## 6 EV 54173 54173 1 ## 7 F9 635 685 0.927 ## 8 FL 3073 3260 0.943 ## 9 HA 342 342 1 ## 10 MQ 1000 26397 0.0379 ## 11 OO 32 32 1 ## 12 UA 56972 58665 0.971 ## 13 US 19837 20536 0.966 ## 14 VX 5162 5162 1 ## 15 WN 12237 12275 0.997 ## 16 YV 601 601 1 ``` Some carriers do not have many of their tailnums data in the `planes` table. (Come back.) 2. 
*Filter flights to only show flights with planes that have flown at least 100 flights.* ``` planes_many <- flights %>% count(tailnum, sort=TRUE) %>% filter(n > 100) semi_join(flights, planes_many) ``` ``` ## Joining, by = "tailnum" ``` ``` ## # A tibble: 229,202 x 19 ## year month day dep_time sched_dep_time dep_delay arr_time ## <int> <int> <int> <int> <int> <dbl> <int> ## 1 2013 1 1 517 515 2 830 ## 2 2013 1 1 533 529 4 850 ## 3 2013 1 1 544 545 -1 1004 ## 4 2013 1 1 554 558 -4 740 ## 5 2013 1 1 555 600 -5 913 ## 6 2013 1 1 557 600 -3 709 ## 7 2013 1 1 557 600 -3 838 ## 8 2013 1 1 558 600 -2 849 ## 9 2013 1 1 558 600 -2 853 ## 10 2013 1 1 558 600 -2 923 ## # ... with 229,192 more rows, and 12 more variables: sched_arr_time <int>, ## # arr_delay <dbl>, carrier <chr>, flight <int>, tailnum <chr>, ## # origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>, hour <dbl>, ## # minute <dbl>, time_hour <dttm> ``` * `add_count()` is another helpful function that could have been used here 3. *Combine `fueleconomy::vehicles` and `fueleconomy::common` to find only the records for the most common models.* ``` fueleconomy::vehicles %>% semi_join(fueleconomy::common, by=c("make", "model")) ``` ``` ## # A tibble: 14,531 x 12 ## id make model year class trans drive cyl displ fuel hwy cty ## <int> <chr> <chr> <int> <chr> <chr> <chr> <int> <dbl> <chr> <int> <int> ## 1 1833 Acura Integ~ 1986 Subc~ Auto~ Fron~ 4 1.6 Regu~ 28 22 ## 2 1834 Acura Integ~ 1986 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 3 3037 Acura Integ~ 1987 Subc~ Auto~ Fron~ 4 1.6 Regu~ 28 22 ## 4 3038 Acura Integ~ 1987 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 5 4183 Acura Integ~ 1988 Subc~ Auto~ Fron~ 4 1.6 Regu~ 27 22 ## 6 4184 Acura Integ~ 1988 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 7 5303 Acura Integ~ 1989 Subc~ Auto~ Fron~ 4 1.6 Regu~ 27 22 ## 8 5304 Acura Integ~ 1989 Subc~ Manu~ Fron~ 4 1.6 Regu~ 28 23 ## 9 6442 Acura Integ~ 1990 Subc~ Auto~ Fron~ 4 1.8 Regu~ 24 20 ## 10 6443 Acura Integ~ 1990 Subc~ Manu~ Fron~ 4 1.8 Regu~ 26 21 ## # ... with 14,521 more rows ``` 4. *Find the 48 hours (over the course of the whole year) that have the worst delays. Cross\-reference it with the `weather` data. Can you see any patterns?* First: Create two variables that together capture all 48 hour time windows across the year, at the day window of granularity (e.g. the time of day the flight takes off does not matter in establishing time windows for this example, only the day). Second: Gather these time windows into a single dataframe (note that this will increase the length of your data by \~364/365 \* 100 %) Third: Group by `window_start_date` and calculate average `arr_delay` and related metrics. 
``` delays_windows <- flights %>% #First mutate(date_flight = lubridate::as_date(time_hour)) %>% mutate(startdate_window1 = cut.Date(date_flight, "2 day")) %>% mutate(date_flight2 = ifelse(!(date_flight == min(date_flight, na.rm = TRUE)), date_flight, NA), date_flight2 = lubridate::as_date(date_flight2), startdate_window2 = cut.Date(date_flight2, "2 day")) %>% select(-date_flight, -date_flight2) %>% #Second gather(startdate_window1, startdate_window2, key = "start_window", value = "window_start_date") %>% filter(!is.na(window_start_date)) %>% #Third group_by(window_start_date) %>% summarise(num = n(), perc_cancelled = sum(is.na(arr_delay)) / n(), mean_delay = mean(arr_delay, na.rm = TRUE), perc_delay = mean(arr_delay > 0, na.rm = TRUE), total_delay_mins = sum(arr_delay, na.rm = TRUE)) %>% ungroup() ``` ``` ## Warning: attributes are not identical across measure variables; ## they will be dropped ``` ``` #don't worry about warning of 'attributes are not identical...', that is #because the cut function assigns attributes to the value, it's fine if #these are dropped here. ``` Create tibble of worst 2\-day period for mean `arr_delay` ``` WorstWindow <- delays_windows %>% mutate(mean_delay_rank = dplyr::min_rank(-mean_delay)) %>% filter(mean_delay_rank <= 1) WorstDates <- tibble(dates = c(lubridate::as_date(WorstWindow$window_start_date), lubridate::as_date(WorstWindow$window_start_date) + lubridate::duration(1, "days"))) ``` Ammend weather data so that weather is an average across three NY locations rather than seperate for each[30](#fn30) ``` weather_ammended <- weather %>% mutate(time_hour = lubridate::make_datetime(year, month, day, hour)) %>% select(-one_of("origin", "year", "month", "day", "hour")) %>% group_by(time_hour) %>% summarise_all(mean, na.rm = TRUE) %>% ungroup() ``` Filtering join to just times weather for worst 2 days ``` weather_worst <- weather_ammended %>% mutate(dates = as_date(time_hour)) %>% semi_join(WorstDates) ``` ``` ## Joining, by = "dates" ``` Plot of hourly weather values across 48 hour time windows. ``` weather_worst %>% select(-dates) %>% gather(temp:visib, key = "weather_type", value = "weather_value") %>% ggplot(aes(x = time_hour, y = weather_value))+ geom_line()+ facet_grid(weather_type ~ ., scales = "free_y")+ labs(title = 'Hourly weather values across worst 48 hours of delays') ``` Patterns: * `wind_gust` and `wind_speed` are the same. * See high level of colinearity in spikes and changes, e.g. increase in `precip` corresponds with decrease in `visib` and perhaps uptick in `wind_spee`Perhaps, we want to view how the average hourly weather values compare on the worst days to average weather days. Create summary of average hourly weather values for worst 48 hour period, for average period, and then append these and plot. ``` bind_rows( weather_worst %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% mutate(category = "weather on worst 48") %>% gather(temp:visib, key = weather_type, value = weather_val) , weather_ammended %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% mutate(category = "weather on average") %>% gather(temp:visib, key = weather_type, value = weather_val) ) %>% ggplot(aes(x = category, y = weather_val, fill = category))+ geom_col()+ facet_wrap(~weather_type, scales = "free_y")+ labs(title = "Hourly average weather values on worst 48 hour window of delays vs. 
hourly average weather across year", caption = "Note that delays are based on mean(arr_delay, na.rm = TRUE)") ``` For this to be the worst 48 hour period, the weather doesn’t actually seem to be as extreme as I would have guessed. Let’s add\-in average `arr_delay` by planned departure time to this to see how the delay times throughout the day varied, to see if there was a surge or change in weather that led to the huge change in delays. ``` flights %>% mutate(dates = as_date(time_hour)) %>% semi_join(WorstDates) %>% group_by(time_hour) %>% summarise(value = mean(arr_delay, na.rm = TRUE)) %>% ungroup() %>% mutate(value_type = "Mean_ArrDelay") %>% bind_rows( weather_worst %>% select(-dates) %>% gather(temp:visib, key = "value_type", value = "value") ) %>% mutate(weather_attr = !(value_type == "Mean_ArrDelay"), value_type = forcats::fct_relevel(value_type, "Mean_ArrDelay")) %>% ggplot(aes(x = time_hour, value, colour = weather_attr))+ geom_line()+ facet_grid(value_type ~ ., scales = "free_y")+ labs(title = 'Hourly weather and delay values across worst 48 hours of delays') ``` ``` ## Joining, by = "dates" ``` Maybe that first uptick in precipitation corresponded with the increase in delay… but still, looks extreme like an incident caused this. I cheched the news and it looks like a plane was crash landed onto the tarmac at one of the airports on this day [https://en.wikipedia.org/wiki/Southwest\_Airlines\_Flight\_345\#cite\_note\-DMN\_Aircraft\_Totaled\_20160808\-4](https://en.wikipedia.org/wiki/Southwest_Airlines_Flight_345#cite_note-DMN_Aircraft_Totaled_20160808-4) , I checked the incident time and it occurred at 17:45 Jul 22, looks like it overlaps with the time we see the uptick in delays. I show plots and models of 48 hour time windows in a variety of other contexts and detail in [Appendix](28-graphics-for-communication.html#appendix-13) 5. *What does `anti_join(flights, airports, by = c("dest" = "faa"))` tell you? What does `anti_join(airports, flights, by = c("faa" = "dest"))` tell you?* * `anti_join(flights, airports, by = c("dest" = "faa"))` – tells me the flight dests missing an airport * `anti_join(airports, flights, by = c("faa" = "dest"))` – tells me the airports with no flights coming to them 6. *You might expect that there’s an implicit relationship between plane and airline, because each plane is flown by a single airline. Confirm or reject this hypothesis using the tools you’ve learned above.* ``` tail_carr <- flights %>% filter(!is.na(tailnum)) %>% distinct(carrier, tailnum) %>% count(tailnum, sort=TRUE) tail_carr %>% filter(n > 1) ``` ``` ## # A tibble: 17 x 2 ## tailnum n ## <chr> <int> ## 1 N146PQ 2 ## 2 N153PQ 2 ## 3 N176PQ 2 ## 4 N181PQ 2 ## 5 N197PQ 2 ## 6 N200PQ 2 ## 7 N228PQ 2 ## 8 N232PQ 2 ## 9 N933AT 2 ## 10 N935AT 2 ## 11 N977AT 2 ## 12 N978AT 2 ## 13 N979AT 2 ## 14 N981AT 2 ## 15 N989AT 2 ## 16 N990AT 2 ## 17 N994AT 2 ``` You should reject that hypothesis, you can see that 17 `tailnum`s are duplicated on multiple carriers. Below is code to show those 17 tailnums ``` flights %>% distinct(carrier, tailnum) %>% filter(!is.na(tailnum)) %>% group_by(tailnum) %>% mutate(n_tail = n()) %>% ungroup() %>% filter(n_tail > 1) %>% arrange(desc(n_tail), tailnum) ``` ``` ## # A tibble: 34 x 3 ## carrier tailnum n_tail ## <chr> <chr> <int> ## 1 9E N146PQ 2 ## 2 EV N146PQ 2 ## 3 9E N153PQ 2 ## 4 EV N153PQ 2 ## 5 9E N176PQ 2 ## 6 EV N176PQ 2 ## 7 9E N181PQ 2 ## 8 EV N181PQ 2 ## 9 9E N197PQ 2 ## 10 EV N197PQ 2 ## # ... with 24 more rows ``` ### 13\.5\.1 1. 
Appendix -------- ### 13\.5\.1\.4 Graph all of these metrics at once using roughly the same method as used on 13\.4\.6 \#4\.
``` delays_windows %>% gather(perc_cancelled, mean_delay, perc_delay, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date)) %>% ggplot(aes(window_start_date, val))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ scale_x_date(date_labels = "%b %d")+ labs(title = 'Measures of delay across 48 hour time windows') ``` Create 48 hour windows for the weather data, following the exact same steps as above. ``` weather_windows <- weather_ammended %>% mutate(date_flight = lubridate::as_date(time_hour)) %>% mutate(startdate_window1 = cut.Date(date_flight, "2 day")) %>% mutate(date_flight2 = ifelse(!(date_flight == min(date_flight, na.rm = TRUE)), date_flight, NA), date_flight2 = lubridate::as_date(date_flight2), startdate_window2 = cut.Date(date_flight2, "2 day")) %>% select(-date_flight, -date_flight2) %>% #Second gather(startdate_window1, startdate_window2, key = "start_window", value = "window_start_date") %>% filter(!is.na(window_start_date)) %>% #Third group_by(window_start_date) %>% summarise_at(vars(temp:visib), mean, na.rm = TRUE) %>% ungroup() %>% select(-wind_gust) ``` ``` ## Warning: attributes are not identical across measure variables; ## they will be dropped ``` Graph using the same method as above… ``` weather_windows %>% gather(temp:visib, key = weather_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date)) %>% ggplot(aes(x = window_start_date, y = val))+ geom_line()+ facet_wrap(~weather_type, ncol = 1, scales = "free_y")+ scale_x_date(date_labels = "%b %d")+ labs(title = 'Measures of weather across 48 hour time windows') ``` Connect the delays and weather data ``` weather_delay_joined <- left_join(delays_windows, weather_windows, by = "window_start_date") %>% select(mean_delay, temp:visib, window_start_date) %>% select(-dewp) %>% #is almost completely correlated with temp so removed one of them... na.omit() ``` Plot of the 48 hour window weather values against mean delay, keeping the order of observations intact ``` weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date), value_type = forcats::fct_relevel(value_type, "mean_delay")) %>% ggplot(aes(x = window_start_date, y = val, colour = ! value_type == "mean_delay"))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ labs(colour = "Weather value", title = "Mean delay and weather value in 2-day rolling window") ``` Plot of mean\_delay against each weather type, each point representing a different ‘window’ ``` weather_delay_joined %>% gather(temp:visib, key = weather_type, value = weather_val) %>% ggplot(aes(x = weather_val, y = mean_delay))+ geom_point()+ geom_smooth()+ facet_wrap(~weather_type, scales = "free_x") ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` In a sense, these plots are not entirely valid, as they obscure the fact that each point is not an independent observation (there is a high level of association between whatever the value was on a given day and what it was on the previous day). E.g. 
mean\_delay has a correlation of \~ 0\.68 with prior days value as shown below… This is often ignored and we can also ignore it for now as it gets into time series and things we don’t need to worry about for now… but somthing to be aware… ``` weather_delay_joined %>% mutate(mean_delay_lag = lag(mean_delay)) %>% select(mean_delay, mean_delay_lag) %>% na.omit() %>% cor() ``` ``` ## mean_delay mean_delay_lag ## mean_delay 1.0000000 0.6795631 ## mean_delay_lag 0.6795631 1.0000000 ``` Data is not Independent (as mentioned above) and many problems associated with this… but let’s ignore this for now and just look at a few statisitics… Can see below that raw correlation of `mean_delay` is highest with humid. ``` weather_delay_joined %>% select(-window_start_date) %>% cor() ``` ``` ## mean_delay temp humid wind_dir wind_speed ## mean_delay 1.00000000 0.08515338 0.4549140 -0.05371522 0.16262585 ## temp 0.08515338 1.00000000 0.3036520 -0.25906906 -0.40160692 ## humid 0.45491403 0.30365205 1.0000000 -0.51010505 -0.30383181 ## wind_dir -0.05371522 -0.25906906 -0.5101050 1.00000000 0.50039832 ## wind_speed 0.16262585 -0.40160692 -0.3038318 0.50039832 1.00000000 ## precip 0.36475598 0.02775525 0.4481898 -0.12853817 0.11176053 ## pressure -0.31716918 -0.23873857 -0.2363718 -0.26627495 -0.25716938 ## visib -0.38740156 0.12290097 -0.6647598 0.26307685 0.05275072 ## precip pressure visib ## mean_delay 0.36475598 -0.3171692 -0.38740156 ## temp 0.02775525 -0.2387386 0.12290097 ## humid 0.44818978 -0.2363718 -0.66475984 ## wind_dir -0.12853817 -0.2662749 0.26307685 ## wind_speed 0.11176053 -0.2571694 0.05275072 ## precip 1.00000000 -0.2265636 -0.44400337 ## pressure -0.22656357 1.0000000 0.12032520 ## visib -0.44400337 0.1203252 1.00000000 ``` When accounting for other variables, see relationship with windspeed seems to emerge as important… ``` weather_delay_joined %>% select(-window_start_date) %>% lm(mean_delay ~ ., data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = mean_delay ~ ., data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -26.179 -7.581 -1.374 5.271 38.008 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 169.56872 132.07737 1.284 0.2000 ## temp 0.05460 0.04702 1.161 0.2464 ## humid 0.48158 0.09088 5.299 2.04e-07 *** ## wind_dir 0.01420 0.01376 1.032 0.3026 ## wind_speed 1.15641 0.25561 4.524 8.28e-06 *** ## precip 140.84141 78.84192 1.786 0.0749 . ## pressure -0.19722 0.12476 -1.581 0.1148 ## visib -1.15009 0.80567 -1.427 0.1543 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 11.64 on 356 degrees of freedom ## Multiple R-squared: 0.3332, Adjusted R-squared: 0.3201 ## F-statistic: 25.42 on 7 and 356 DF, p-value: < 2.2e-16 ``` There a variety of reasons[31](#fn31) you may want to evaluate how the change in an attribute relates to the change in another attribute. In the cases below I plot the diffs for example: *(average value on 2013\-02\-07 to 2013\-02\-08\) \- (average value on 2013\-02\-08 to 2013\-02\-09\)* Note that the time windows are not distinct but overlap by 24 hours. 
If doing a thorough account of time\-series you would do a lot more than I show below… ``` weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% mutate(window_start_date = lubridate::as_date(window_start_date), value_type = forcats::fct_relevel(value_type, "mean_delay")) %>% group_by(value_type) %>% mutate(value_diff = val - lag(val)) %>% ggplot(aes(x = window_start_date, y = value_diff, colour = !value_type == "mean_delay"))+ geom_line()+ facet_wrap(~value_type, scales = "free_y", ncol = 1)+ labs(colour = "Weather value", title = "Plot of diffs in value") ``` ``` ## Warning: Removed 2 rows containing missing values (geom_path). ``` Let’s plot these diffs as a scatter plot now (no longer looking at the order in which the observations emerged) ``` weather_delay_joined %>% gather(temp:visib, key = weather_type, value = val) %>% group_by(weather_type) %>% mutate(weather_diff = val - lag(val), delay_diff = mean_delay - lag(mean_delay)) %>% ungroup() %>% ggplot(aes(x = weather_diff, y = delay_diff))+ geom_point()+ geom_smooth()+ facet_wrap(~weather_type, scales = "free_x")+ labs(title = "scatter plot of diffs in value") ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ``` ## Warning: Removed 7 rows containing non-finite values (stat_smooth). ``` ``` ## Warning: Removed 7 rows containing missing values (geom_point). ``` Let’s look at the correlatioin and regression against these diffs ``` diff_data <- weather_delay_joined %>% gather(mean_delay, temp:visib, key = value_type, value = val) %>% group_by(value_type) %>% mutate(diff = val - lag(val)) %>% ungroup() %>% select(-val) %>% spread(key = value_type, value = diff) diff_data %>% select(-window_start_date) %>% na.omit() %>% cor() ``` ``` ## humid mean_delay precip pressure temp ## humid 1.0000000 0.54331654 0.48014091 -0.3427556 0.318534448 ## mean_delay 0.5433165 1.00000000 0.51510649 -0.3247584 0.150601446 ## precip 0.4801409 0.51510649 1.00000000 -0.3014413 0.074916969 ## pressure -0.3427556 -0.32475840 -0.30144131 1.0000000 -0.488629288 ## temp 0.3185344 0.15060145 0.07491697 -0.4886293 1.000000000 ## visib -0.7393902 -0.53844191 -0.49795469 0.2721685 -0.206815887 ## wind_dir -0.4978895 -0.20689204 -0.20823801 -0.2443716 -0.003608694 ## wind_speed -0.1964910 0.05738881 0.15742776 -0.3687487 -0.085437521 ## visib wind_dir wind_speed ## humid -0.73939024 -0.497889528 -0.19649100 ## mean_delay -0.53844191 -0.206892045 0.05738881 ## precip -0.49795469 -0.208238012 0.15742776 ## pressure 0.27216848 -0.244371617 -0.36874869 ## temp -0.20681589 -0.003608694 -0.08543752 ## visib 1.00000000 0.378625695 0.06152223 ## wind_dir 0.37862569 1.000000000 0.43970745 ## wind_speed 0.06152223 0.439707451 1.00000000 ``` ``` diff_data %>% select(-window_start_date) %>% lm(mean_delay ~ ., data = .) %>% summary() ``` ``` ## ## Call: ## lm(formula = mean_delay ~ ., data = .) ## ## Residuals: ## Min 1Q Median 3Q Max ## -32.843 -4.394 -0.189 3.749 27.177 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.022454 0.460301 -0.049 0.961121 ## humid 0.281416 0.082305 3.419 0.000701 *** ## precip 324.087906 63.453719 5.107 5.34e-07 *** ## pressure -0.275033 0.149084 -1.845 0.065895 . ## temp -0.127570 0.143134 -0.891 0.373394 ## visib -2.420046 0.728749 -3.321 0.000991 *** ## wind_dir 0.002373 0.012316 0.193 0.847329 ## wind_speed 0.128749 0.226138 0.569 0.569487 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 8.77 on 355 degrees of freedom ## (1 observation deleted due to missingness) ## Multiple R-squared: 0.4111, Adjusted R-squared: 0.3995 ## F-statistic: 35.4 on 7 and 355 DF, p-value: < 2.2e-16 ```
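The notes above flag that consecutive 2\-day windows are far from independent. As a minimal sketch of how you might start inspecting that autocorrelation (assuming the `weather_delay_joined` data frame built earlier; this is not part of the original analysis), base R’s `acf()` can be applied to the ordered series of window means:

```
weather_delay_joined %>%
  arrange(window_start_date) %>%   # put the windows in time order
  pull(mean_delay) %>%
  acf(main = "Autocorrelation of mean_delay across 2-day windows")
```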
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/14-strings.html
Ch. 14: Strings =============== **Key questions:** * 14\.2\.5\. \#3, 6 * 14\.3\.2\.1\. \#2 * 14\.3\.3\.1 \#1, \#2 **Functions and notes:** * `writeLines`: see raw contents of a string (prints each string in a vector on a new line) * `str_length`: number of characters in a string * `str_c`: combine two or more strings + use `collapse` arg to collapse a vector of strings into a single string * `str_replace_na`: print `NA` as “NA” * `str_sub`: `start` and `end` args to specify position to remove (or replace), can use negative numbers as well to count from the back * `str_to_lower`, `str_to_upper`, `str_to_title`: for changing string case + `locale` arg (to handle slight differences in characters) * `str_order`, `str_sort`: more robust versions of `order` and `sort` which allow a `locale` argument * `str_view`, `str_view_all`: show how a string and a regular expression match * `\d`: matches any digit. * `\s`: matches any whitespace (e.g. space, tab, newline). * `[abc]`: matches a, b, or c. * `[^abc]`: matches anything except a, b, or c. * `{n}`: exactly n * `{n,}`: n or more * `{,m}`: at most m * `{n,m}`: between n and m * `str_detect`: returns logical vector of `TRUE`/`FALSE` values * `str_subset`: subset of `TRUE` values from `str_detect` * `str_count`: number of matches in a string * `str_extract`: extract actual text of a match * `str_extract_all`: returns list with all matches + `simplify = TRUE` returns a matrix * `str_match`: similar to `str_extract` but gives each individual component of the match in a matrix, rather than a character vector (there is also a `str_match_all`) * `tidyr::extract`: like `str_match` but you name the matches, which are moved into new columns * `str_replace`, `str_replace_all`: replace matches with new strings * `str_split`: split a string into pieces – default is individual words (returns list) + `simplify = TRUE` again will return a matrix * `boundary`: use to specify the level of the split, e.g. `str_view_all(x, boundary("word"))` * `str_locate`, `str_locate_all`: give the starting and ending positions of each match * `regex`: use in the match to specify more options, e.g. `str_view(bananas, regex("banana", ignore_case = TRUE))` + `multiline = TRUE` allows `^` and `$` to match start and end of each line (rather than of the string) + `comments = TRUE` allows you to add comments on a complex regular expression + `dotall = TRUE` allows `.` to match everything, including `\\n` * `fixed`, `coll`: related alternatives to `regex` * `apropos`: searches all objects available from the global environment (e.g. if you can’t remember a function name) * `dir`: lists all files in a directory + `pattern` arg takes a regex * `stringi`: a more comprehensive package than `stringr` (\~5x as many functions) 14\.2: String basics -------------------- Use `writeLines` to show what the string ‘This string has a \\n new line’ looks like printed. ``` string_exp <- 'This string has a \n new line' print(string_exp) ``` ``` ## [1] "This string has a \n new line" ``` ``` writeLines(string_exp) ``` ``` ## This string has a ## new line ``` To see the full list of special characters: ``` ?'"' ``` Objects of length 0 are silently dropped. This is particularly useful in conjunction with `if`: ``` name <- "Bryan" time_of_day <- "morning" birthday <- FALSE str_c( "Good ", time_of_day, " ", name, if (birthday) " and HAPPY BIRTHDAY", "." ) ``` ``` ## [1] "Good morning Bryan." 
``` Collapse vectors into a single string ``` str_c(c("x", "y", "z"), c("a", "b", "c"), collapse = ", ") ``` ``` ## [1] "xa, yb, zc" ``` Can use the assignment form of `str_sub()` ``` x <- c("Apple", "Banana", "Pear") str_sub(x, 1, 1) <- str_to_lower(str_sub(x, 1, 1)) x ``` ``` ## [1] "apple" "banana" "pear" ``` `str_pad` looks interesting ``` str_pad("the dogs come for you.", width = 40, pad = ",", side = "both") #must specify width =, side = default is left ``` ``` ## [1] ",,,,,,,,,the dogs come for you.,,,,,,,,," ``` ### 14\.2\.5 1. *In code that doesn’t use stringr, you’ll often see `paste()` and `paste0()`. What’s the difference between the two functions?* * `paste0()` has no `sep` argument and just appends the values directly to one another. * They differ from `str_c()` in that they automatically convert `NA` values to character. ``` paste("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "a-b-c-x" "a-b-c-y" ``` ``` paste0("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "abcx-" "abcy-" ``` *What `stringr` function are they equivalent to?* `paste()` and `paste0()` are similar to `str_c()`, though they are different in how they handle NAs (see below). Also, `str_c()` (unlike `paste()`) returns a warning when recycling vectors whose lengths do not have a common factor. ``` paste(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## [1] "a-x" "b-y" "x-x" ``` ``` str_c(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## Warning in stri_c(..., sep = sep, collapse = collapse, ignore_null = TRUE): ## longer object length is not a multiple of shorter object length ``` ``` ## [1] "a-x" "b-y" "x-x" ``` *How do the functions differ in their handling of `NA`?* ``` paste(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] "a-NA" "b-y" ``` ``` str_c(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] NA "b-y" ``` 2. *In your own words, describe the difference between the `sep` and `collapse` arguments to `str_c()`.* `sep` puts characters between items within a vector; `collapse` puts characters between vectors being collapsed into a single string 3. *Use `str_length()` and `str_sub()` to extract the middle character from a string.* ``` x <- "world" str_sub(x, start = ceiling(str_length(x) / 2), end = ceiling(str_length(x) / 2)) ``` ``` ## [1] "r" ``` *What will you do if the string has an even number of characters?* In this circumstance the above solution would take the anterior middle value; below, the first option returns the posterior middle value and the second returns both middle values. ``` x <- "worlds" str_sub(x, ceiling(str_length(x) / 2 + 1), start = ceiling(str_length(x) / 2 + 1)) ``` ``` ## [1] "l" ``` ``` str_sub(x, start = ifelse(str_length(x) %% 2 == 0, floor(str_length(x) / 2), ceiling(str_length(x) / 2 )), end = floor(str_length(x) / 2) + 1) ``` ``` ## [1] "rl" ``` 4. *What does `str_wrap()` do? When might you want to use it?* * Use `indent` for the first line, `exdent` for the others * could use `str_wrap()` when editing documents, etc.; setting `width = 1` will give each word its own line ``` str_wrap("Tonight, we dine in Hell.", width = 10, indent = 0, exdent = 3) %>% writeLines() ``` ``` ## Tonight, ## we dine in ## Hell. ``` 5. *What does `str_trim()` do? What’s the opposite of `str_trim()`?* Removes whitespace from the beginning and end of a string; the `side` argument specifies which side. The opposite of `str_trim()` is `str_pad()`. ``` str_trim(" so much white space ", side = "right") # (default is 'both') ``` ``` ## [1] " so much white space" ``` 6. *Write a function that turns (e.g.) a vector `c("a", "b", "c")` into the string `a, b, and c`. 
Think carefully about what it should do if given a vector of length 0, 1, or 2\.* ``` vec_to_string <- function(x) { #If 1 or 0 length vector if (length(x) < 2) return(x) comma <- ifelse(length(x) > 2, ", ", " ") b <- str_c(x, collapse = comma) #replace ',' with 'and' in last str_sub(b,-(str_length(x)[length(x)] + 1), -(str_length(x)[length(x)] + 1)) <- " and " return(b) } x <- c("a", "b", "c", "d") vec_to_string(x) ``` ``` ## [1] "a, b, c, and d" ``` 14\.3: Matching patterns w/ regex --------------------------------- ``` x <- c("apple", "banana", "pear") str_view(x, "an") ``` To match a literal `\` need `\\\\` because both string and regex will escape it. ``` x <- "a\\b" writeLines(x) ``` ``` ## a\b ``` ``` str_view(x,"\\\\") ``` Using `\b` to set boundary between words (not used often) ``` apropos("\\bsum\\b") ``` ``` ## [1] "contr.sum" "sum" ``` ``` apropos("^(sum)$") ``` ``` ## [1] "sum" ``` Other special characters: * `\d`: matches any digit. * `\s`: matches any whitespace (e.g. space, tab, newline). * `[abc]`: matches a, b, or c. * `[^abc]`: matches anything except a, b, or c. Controlling number of times: * `?`: 0 or 1 * `+`: 1 or more * `*`: 0 or more * `{n}`: exactly n * `{n,}`: n or more * `{,m}`: at most m * `{n,m}`: between n and m By default these matches are “greedy”: they will match the longest string possible. You can make them “lazy”, matching the shortest string possible by putting a `?` after them. This is an advanced feature of regular expressions, but it’s useful to know that it exists: ``` x <- "1888 is the longest year in Roman numerals: MDCCCLXXXVIII" str_view(x, 'C{2,3}') ``` ``` str_view(x, 'C{2,3}?') ``` ### 14\.3\.1\.1 1. *Explain why each of these strings don’t match a `\`: `"\"`, `"\\"`, `"\\\"`.* `"\"` \-\> leaves open quote string because escapes quote `"\\"`, \-\> escapes second `\` so left with blank `"\\\"` \-\> third `\` escapes quote so left with open quote as well 2. *How would you match the sequence `"'\`?* ``` x <- "alfred\"'\\goes" writeLines(x) ``` ``` ## alfred"'\goes ``` ``` str_view(x, "\\\"'\\\\") ``` 3. *What patterns will the regular expression `\..\..\..` match?* Would match 6 character string of following form “(dot)(anychar)(dot)(anychar)(dot)(anychar)” ``` x <- c("alf.r.e.dd.ss..lsdf.d.kj") str_view(x, pattern = "\\..\\..\\..") ``` *How would you represent it as a string?* ``` x_pattern <- "\\..\\..\\.." writeLines(x_pattern) ``` ``` ## \..\..\.. ``` ### 14\.3\.2\.1 1. *How would you match the literal string `"$^$"`?* ``` x <- "so it goes $^$ here" str_view(x, "\\$\\^\\$") ``` 2. *Given the corpus of common words in `stringr::words`, create regular expressions that find all words that:* 1. *Start with “y”.* ``` str_view(stringr::words, "^y", match = TRUE) ``` 2. *End with “x”* ``` str_view(stringr::words, "x$", match = TRUE) ``` 3. *Are exactly three letters long. (Don’t cheat by using `str_length()`!)* ``` str_view(stringr::words, "^...$", match = TRUE) ``` 4. *Have seven letters or more.* ``` str_view(stringr::words, ".......", match = TRUE) ``` Since this list is long, you might want to use the `match` argument to `str_view()` to show only the matching or non\-matching words. ### 14\.3\.3\.1 1. *Create regular expressions to find all words that:* 1. *Start with a vowel.* ``` str_view(stringr::words, "^[aeiou]", match = TRUE) ``` 2. *That only contain consonants. (Hint: thinking about matching “not”\-vowels.)* ``` str_view(stringr::words, "^[^aeiou]*[^aeiouy]$", match = TRUE) ``` 3. 
*End with `ed`, but not with `eed`.* ``` str_view(stringr::words, "[^e]ed$", match = TRUE) ``` 4. *End with `ing` or `ise`.* ``` str_view(stringr::words, "(ing|ise)$", match = TRUE) ``` 2. *Empirically verify the rule “i before e except after c”.* ``` str_view(stringr::words, "(^(ei))|cie|[^c]ei", match = TRUE) ``` 3. *Is “q” always followed by a “u”?* ``` str_view(stringr::words, "q[^u]", match = TRUE) ``` Of the words in this list, yes. 4. *Write a regular expression that matches a word if it’s probably written in British English, not American English.* ``` str_view(stringr::words, "(l|b)our|parat", match = TRUE) ``` 5. *Create a regular expression that will match telephone numbers as commonly written in your country.* ``` x <- c("dkl kls. klk. _", "(425) 591-6020", "her number is (581) 434-3242", "442", " dsi") str_view(x, "\\(\\d\\d\\d\\)\\s\\d\\d\\d-\\d\\d\\d\\d") ``` The above isn’t a good way to solve this; we’ll see better methods in the next section. ### 14\.3\.4\.1 1. *Describe the equivalents of `?`, `+`, `*` in `{m,n}` form.* `?` : `{0,1}` `+` : `{1, }` `*` : `{0, }` 2. *Describe in words what these regular expressions match: (read carefully to see if I’m using a regular expression or a string that defines a regular expression.)* 1. `^.*$` : matches any whole string (anchored at the start and the end) ``` str_view(x, "^.*$") ``` 2. `"\\{.+\\}"` : matches one or more characters enclosed in curly braces ``` x <- c("test", "some in {brackets}", "just {} no match") str_view(x, "\\{.+\\}") ``` 3. `\d{4}-\d{2}-\d{2}`: 4 digits \- 2 digits \- 2 digits ``` x <- c("4444-22-22", "test", "333-4444-22") str_view(x, "\\d{4}-\\d{2}-\\d{2}") ``` 4. `"\\\\{4}"`: 4 backslashes ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") writeLines(x) ``` ``` ## \\\\ ## \\\ ## \\ ## \ ``` ``` str_view(x, "\\\\{4}") ``` ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") str_view(x, "\\\\\\\\") ``` 3. *Create regular expressions to find all words that:* 1. find all words that start with three consonants ``` str_view(stringr::words, "^[^aeoiouy]{3}", match = TRUE) ``` * Include `y` because when it shows up otherwise it is acting as a vowel. 2. have three or more vowels in a row ``` str_view(stringr::words, "[aeiou]{3}", match = TRUE) ``` In this case, do not include the `y`. 3. have 2 or more vowel\-consonant pairs in a row ``` str_view(stringr::words, "([aeiou][^aeiou]){2,}", match = TRUE) ``` 4. *Solve the beginner regexp crosswords at* *<https://regexcrossword.com/challenges/beginner>.* ### 14\.3\.5\.1 1. *Describe, in words, what these expressions will match:* * I change questions 1 and 3 to what I think they were meant to be written as `(.)\\1\\1` and `(..)\\1` respectively. 1. `(.)\\1\\1` : the same character repeated three times in a row 2. `"(.)(.)\\2\\1"` : a pair of characters followed by the same pair in reverse order 3. `(..)\\1` : a pair of characters repeated twice 4. `"(.).\\1.\\1"` : a character that appears three times, with one character between each occurrence 5. `"(.)(.)(.).*\\3\\2\\1"` : three characters, then any number of characters, then the same three characters in reverse order ``` x <- c("steefddff", "ssdfsdfffsdasdlkd", "DLKKJIOWdkl", "klnlsd", "t11", "(.)\1\1") str_view_all(x, "(.)\\1\\1", match = TRUE) #xxx ``` ``` str_view_all(fruit, "(.)(.)\\2\\1", match = TRUE) #xyyx ``` ``` str_view_all(fruit, "(..)\\1", match = TRUE) #xxyy ``` ``` str_view(stringr::words, "(.).\\1.\\1", match = TRUE) #x.x.x ``` ``` str_view(stringr::words, "(.)(.)(.).*\\3\\2\\1", match = TRUE) #xyz.*zyx ``` 2. 
*Construct regular expressions to match words that:* 1. *Start and end with the same character.* ``` str_view(stringr::words, "^(.).*\\1$", match = TRUE) ``` 2. *Contain a repeated pair of letters* (e.g. “church” contains “ch” repeated twice.) ``` str_view(stringr::words, "(..).*\\1", match = TRUE) ``` 3. *Contain one letter repeated in at least three places* (e.g. “eleven” contains three “e”s.) ``` str_view(stringr::words, "(.).+\\1.+\\1", match = TRUE) ``` 14\.4 Tools ----------- ``` noun <- "(a|the) ([^ \\.]+)" has_noun <- sentences %>% str_subset(noun) %>% head(10) has_noun %>% str_extract_all(noun, simplify = TRUE) #creates split into seperate pieces has_noun %>% str_match_all(noun) #Can make dataframe with, but need to name all tibble(has_noun = has_noun) %>% extract(has_noun, into = c("article", "noun"), regex = noun) ``` * When using `boundary()` with `str_split` can set to “character”, “line”, “sentence”, and “word” and gives alternative to splitting by pattern. ### 14\.4\.2 1. *For each of the following challenges, try solving it by using both a single* *regular expression, and a combination of multiple `str_detect()` calls.* 1. *Find all words that start or end with `x`.* ``` str_subset(words, "^x|x$") ``` ``` ## [1] "box" "sex" "six" "tax" ``` 2. *Find all words that start with a vowel and end with a consonant.* ``` str_subset(words, "^[aeiou].*[^aeiouy]$") ``` ``` ## [1] "about" "accept" "account" "across" "act" ## [6] "actual" "add" "address" "admit" "affect" ## [11] "afford" "after" "afternoon" "again" "against" ## [16] "agent" "air" "all" "allow" "almost" ## [21] "along" "alright" "although" "always" "amount" ## [26] "and" "another" "answer" "apart" "apparent" ## [31] "appear" "appoint" "approach" "arm" "around" ## [36] "art" "as" "ask" "at" "attend" ## [41] "awful" "each" "east" "eat" "effect" ## [46] "egg" "eight" "either" "elect" "electric" ## [51] "eleven" "end" "english" "enough" "enter" ## [56] "environment" "equal" "especial" "even" "evening" ## [61] "ever" "exact" "except" "exist" "expect" ## [66] "explain" "express" "if" "important" "in" ## [71] "indeed" "individual" "inform" "instead" "interest" ## [76] "invest" "it" "item" "obvious" "occasion" ## [81] "odd" "of" "off" "offer" "often" ## [86] "old" "on" "open" "or" "order" ## [91] "original" "other" "ought" "out" "over" ## [96] "own" "under" "understand" "union" "unit" ## [101] "unless" "until" "up" "upon" "usual" ``` Counted `y` as a vowel if ending with, but not to start. This does not work perfect. For example words like `ygritte` would still be included even though `y` is activng as a vowel there whereas words like `boy` would be excluded even though acting as a consonant there. From here on out I am going to always exclude `y`. 3. *Are there any words that contain at least one of each different vowel?* ``` vowels <- c("a","e","i","o","u") words[str_detect(words, "a") & str_detect(words, "e") & str_detect(words, "i") & str_detect(words, "o") & str_detect(words, "u")] ``` ``` ## character(0) ``` No. 2. *What word has the highest number of vowels? What word has the highest* *proportion of vowels? 
(Hint: what is the denominator?)* ``` vowel_counts <- tibble(words = words, n_string = str_length(words), n_vowel = str_count(words, vowels), prop_vowel = n_vowel / n_string) ``` ‘Experience’ has the most vowels ``` vowel_counts %>% arrange(desc(n_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 experience 10 4 0.4 ## 2 individual 10 3 0.3 ## 3 achieve 7 2 0.286 ## 4 actual 6 2 0.333 ## 5 afternoon 9 2 0.222 ## 6 against 7 2 0.286 ## 7 already 7 2 0.286 ## 8 america 7 2 0.286 ## 9 benefit 7 2 0.286 ## 10 choose 6 2 0.333 ## # ... with 970 more rows ``` ‘a’ has the highest proportion ``` vowel_counts %>% arrange(desc(prop_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 a 1 1 1 ## 2 too 3 2 0.667 ## 3 wee 3 2 0.667 ## 4 feed 4 2 0.5 ## 5 in 2 1 0.5 ## 6 look 4 2 0.5 ## 7 need 4 2 0.5 ## 8 room 4 2 0.5 ## 9 so 2 1 0.5 ## 10 soon 4 2 0.5 ## # ... with 970 more rows ``` ### 14\.4\.3\.1 1. *In the previous example, you might have noticed that the regular* *expression matched “flickered”, which is not a colour. Modify the* *regex to fix the problem.* Add space in front of colors: ``` colours <- c("red", "orange", "yellow", "green", "blue", "purple") %>% paste0(" ", .) colour_match <- str_c(colours, collapse = "|") more <- sentences[str_count(sentences, colour_match) > 1] str_view_all(more, colour_match) ``` 2. *From the Harvard sentences data, extract:* 1. *The first word from each sentence.* ``` str_extract(sentences, "[A-z]*") ``` 2. *All words ending in `ing`.* ``` #ends in "ing" or "ing." sent_ing <- str_subset(sentences, ".*ing(\\.|\\s)") str_extract_all(sent_ing, "[A-z]+ing", simplify=TRUE) ``` 3. *All plurals.* ``` str_subset(sentences, "[A-z]*s(\\.|\\s)") %>% #take all sentences that have a word ending in s str_extract_all("[A-z]*s\\b", simplify = TRUE) %>% .[str_length(.) > 3] %>% #get rid of the short words str_subset(".*[^s]s$") %>% #get rid of words ending in 'ss' str_subset(".*[^i]s$") #get rid of 'this' ``` ### 14\.4\.4\.1 1. *Find all words that come after a “number” like “one”, “two”, “three” etc.* *Pull out both the number and the word.* ``` #Create regex expression nums <- c("one", "two", "three", "four", "five", "six", "seven", "eight", "nine") nums_c <- str_c(nums, collapse = "|") # see stringr cheatsheet: "(?<![:alpha:])" means not preceded by re <- str_c("(", "(?<![:alpha:])", "(", nums_c, "))", " ", "([^ \\.]+)", sep = "") sentences %>% str_subset(regex(re, ignore_case = TRUE)) %>% str_extract_all(regex(re, ignore_case = TRUE)) %>% unlist() %>% tibble::enframe(name = NULL) %>% separate(col = "value", into = c("num", "following"), remove = FALSE) ``` ``` ## # A tibble: 30 x 3 ## value num following ## <chr> <chr> <chr> ## 1 Four hours Four hours ## 2 Two blue Two blue ## 3 seven books seven books ## 4 two met two met ## 5 two factors two factors ## 6 three lists three lists ## 7 Two plus Two plus ## 8 seven is seven is ## 9 two when two when ## 10 Eight miles Eight miles ## # ... with 20 more rows ``` * I’d initially appended `"\\b"` in front of each number to prevent things like “someone” being captured – however this didn’t work with cases where a sentence started with a number – hence switched to using the *not preceded by* method in the [stringr cheatsheet](https://www.rstudio.com/resources/cheatsheets/). 2. *Find all contractions. 
Separate out the pieces before and after the* *apostrophe.* ``` #note the () facilitate the split with functions contr <- "([^ \\.]+)'([^ \\.]*)" sentences %>% #note the improvement this word definition is to the above [^ ]+ str_subset(contr) %>% str_match_all(contr) ``` ``` ## [[1]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[2]] ## [,1] [,2] [,3] ## [1,] "man's" "man" "s" ## ## [[3]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[4]] ## [,1] [,2] [,3] ## [1,] "store's" "store" "s" ## ## [[5]] ## [,1] [,2] [,3] ## [1,] "workmen's" "workmen" "s" ## ## [[6]] ## [,1] [,2] [,3] ## [1,] "Let's" "Let" "s" ## ## [[7]] ## [,1] [,2] [,3] ## [1,] "sun's" "sun" "s" ## ## [[8]] ## [,1] [,2] [,3] ## [1,] "child's" "child" "s" ## ## [[9]] ## [,1] [,2] [,3] ## [1,] "king's" "king" "s" ## ## [[10]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[11]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[12]] ## [,1] [,2] [,3] ## [1,] "queen's" "queen" "s" ## ## [[13]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[14]] ## [,1] [,2] [,3] ## [1,] "pirate's" "pirate" "s" ## ## [[15]] ## [,1] [,2] [,3] ## [1,] "neighbor's" "neighbor" "s" ``` ### 14\.4\.5\.1 1. *Replace all forward slashes in a string with backslashes.* ``` x <- c("test/dklsk/") str_replace_all(x, "/", "\\\\") %>% writeLines() ``` ``` ## test\dklsk\ ``` 2. *Implement a simple version of `str_to_lower()` using `replace_all()`.* ``` x <- c("BIdklsKOS") str_replace_all(x, "([A-Z])", tolower) ``` ``` ## [1] "bidklskos" ``` 3. *Switch the first and last letters in `words`. Which of those strings* *are still words?* ``` str_replace(words, "(^.)(.*)(.$)", "\\3\\2\\1") ``` Any words that start and end with the same letter, e.g. ‘treat’, as well as a few other examples like, war –\> raw . ### 14\.4\.6\.1 1. *Split up a string like `"apples, pears, and bananas"` into individual* *components.* ``` x <- "apples, pears, and bananas" str_split(x, ",* ") #note that regular expression works to handle commas as well ``` ``` ## [[1]] ## [1] "apples" "pears" "and" "bananas" ``` 2. *Why is it better to split up by `boundary("word")` than `" "`?* Handles commas and punctuation[32](#fn32). ``` str_split(x, boundary("word")) ``` ``` ## [[1]] ## [1] "apples" "pears" "and" "bananas" ``` 3. *What does splitting with an empty string (`""`) do? Experiment, and* *then read the documentation.* Splitting by an empty string splits up each character. ``` str_split(x,"") ``` ``` ## [[1]] ## [1] "a" "p" "p" "l" "e" "s" "," " " "p" "e" "a" "r" "s" "," " " "a" "n" ## [18] "d" " " "b" "a" "n" "a" "n" "a" "s" ``` * splits each character into an individual element (and creates elements for spaces between strings) 14\.5: Other types of patterns ------------------------------ `regex` args to know: * `ignore_case = TRUE` allows characters to match either their uppercase or lowercase forms. This always uses the current locale. * `multiline = TRUE` allows `^` and `$` to match the start and end of each line rather than the start and end of the complete string. * `comments = TRUE` allows you to use comments and white space to make complex regular expressions more understandable. Spaces are ignored, as is everything after `#`. To match a literal space, you’ll need to escape it: `"\\ "`. * `dotall = TRUE` allows `.` to match everything, including `\n`. Alternatives to `regex()`: \* `fixed()`: matches exactly the specified sequence of bytes. It ignores all special regular expressions and operates at a very low level. 
This allows you to avoid complex escaping and can be much faster than regular expressions. \* `coll()`: compare strings using standard **coll**ation rules. This is useful for doing case insensitive matching. Note that `coll()` takes a `locale` parameter that controls which rules are used for comparing characters. ### 14\.5\.1 1. *How would you find all strings containing `\` with `regex()` vs.* *with `fixed()`?* With `fixed()` the pattern would be `"\\"` instead of `"\\\\"` ``` str_view_all("so \\ the party is on\\ right?", fixed("\\")) ``` 2. *What are the five most common words in `sentences`?* ``` str_extract_all(sentences, boundary("word"), simplify = TRUE) %>% as_tibble() %>% gather(V1:V12, value = "words", key = "order") %>% mutate(words = str_to_lower(words)) %>% filter(!words == "") %>% count(words, sort = TRUE) %>% head(5) ``` ``` ## Warning: `as_tibble.matrix()` requires a matrix with column names or a `.name_repair` argument. Using compatibility `.name_repair`. ## This warning is displayed once per session. ``` ``` ## # A tibble: 5 x 2 ## words n ## <chr> <int> ## 1 the 751 ## 2 a 202 ## 3 of 132 ## 4 to 123 ## 5 and 118 ``` 14\.7: stringi -------------- Other functions: * `apropos` searches all objects available from the global environment–useful if you can’t remember a function name. Check those that start with `replace`: ``` apropos("^(replace)") ``` ``` ## [1] "replace" "replace_na" ``` Check those that start with `str`, but not `stri` ``` apropos("^(str)[^i]") ``` ``` ## [1] "str_c" "str_conv" "str_count" ## [4] "str_detect" "str_dup" "str_extract" ## [7] "str_extract_all" "str_flatten" "str_glue" ## [10] "str_glue_data" "str_interp" "str_length" ## [13] "str_locate" "str_locate_all" "str_match" ## [16] "str_match_all" "str_order" "str_pad" ## [19] "str_remove" "str_remove_all" "str_replace" ## [22] "str_replace_all" "str_replace_na" "str_sort" ## [25] "str_split" "str_split_fixed" "str_squish" ## [28] "str_sub" "str_sub<-" "str_subset" ## [31] "str_to_lower" "str_to_title" "str_to_upper" ## [34] "str_trim" "str_trunc" "str_view" ## [37] "str_view_all" "str_which" "str_wrap" ## [40] "strcapture" "strftime" "strheight" ## [43] "strOptions" "strptime" "strrep" ## [46] "strsplit" "strtoi" "strtrim" ## [49] "StructTS" "structure" "strwidth" ## [52] "strwrap" ``` ### 14\.7\.1 1. *Find the stringi functions that:* 1. *Count the number of words.* – `stri_count_words` 2. *Find duplicated strings.* – `stri_duplicated` 3. *Generate random text.* – `stri_rand_strings` (or `stri_rand_lipsum` for lorem\-ipsum\-style text) 2. *How do you control the language that `stri_sort()` uses for sorting?* The `locale` argument, passed via `opts_collator = stri_opts_collator(locale = ...)` (or directly through `...`) Appendix -------- ### 14\.4\.2\.3 One way of doing this (checking for words that contain at least one of each vowel) using iteration methods: ``` vowels <- c("a","e","i","o","u") tibble(vowels = vowels, words = list(words)) %>% mutate(detect_vowels = purrr::map2(words, vowels, str_detect)) %>% spread(key = vowels, value = detect_vowels) %>% unnest() %>% mutate(unique_vowels = rowSums(.[2:6])) %>% arrange(desc(unique_vowels)) ``` ``` ## # A tibble: 980 x 7 ## words a e i o u unique_vowels ## <chr> <lgl> <lgl> <lgl> <lgl> <lgl> <dbl> ## 1 absolute TRUE TRUE FALSE TRUE TRUE 4 ## 2 appropriate TRUE TRUE TRUE TRUE FALSE 4 ## 3 associate TRUE TRUE TRUE TRUE FALSE 4 ## 4 authority TRUE FALSE TRUE TRUE TRUE 4 ## 5 colleague TRUE TRUE FALSE TRUE TRUE 4 ## 6 continue FALSE TRUE TRUE TRUE TRUE 4 ## 7 encourage TRUE TRUE FALSE TRUE TRUE 4 ## 8 introduce FALSE TRUE TRUE TRUE TRUE 4 ## 9 organize TRUE TRUE TRUE TRUE FALSE 4 ## 10 previous FALSE TRUE TRUE TRUE TRUE 4 ## # ... 
with 970 more rows ``` ``` #seems that nothing gets over 4 ``` 14\.2: String basics -------------------- Use `wrteLines` to show what string ‘This string has a \\n new line’ looks like printed. ``` string_exp <- 'This string has a \n new line' print(string_exp) ``` ``` ## [1] "This string has a \n new line" ``` ``` writeLines(string_exp) ``` ``` ## This string has a ## new line ``` To see full list of specifal characters: ``` ?'"' ``` Objects of length 0 are silently dropped. This is particularly useful in conjunction with `if`: ``` name <- "Bryan" time_of_day <- "morning" birthday <- FALSE str_c( "Good ", time_of_day, " ", name, if (birthday) " and HAPPY BIRTHDAY", "." ) ``` ``` ## [1] "Good morning Bryan." ``` Collapse vectors into single string ``` str_c(c("x", "y", "z"), c("a", "b", "c"), collapse = ", ") ``` ``` ## [1] "xa, yb, zc" ``` Can use assignment form of `str_sub()` ``` x <- c("Apple", "Banana", "Pear") str_sub(x, 1, 1) <- str_to_lower(str_sub(x, 1, 1)) x ``` ``` ## [1] "apple" "banana" "pear" ``` `str_pad` looks interesting ``` str_pad("the dogs come for you.", width = 40, pad = ",", side = "both") #must specify width =, side = default is left ``` ``` ## [1] ",,,,,,,,,the dogs come for you.,,,,,,,,," ``` ### 14\.2\.5 1. *In code that doesn’t use stringr, you’ll often see `paste()` and `paste0()`. What’s the difference between the two functions?* * `paste0()` has no `sep` argument and just appends any value provided like another string vector. * They differ from `str_c()` in that they automatically convert `NA` values to character. ``` paste("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "a-b-c-x" "a-b-c-y" ``` ``` paste0("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "abcx-" "abcy-" ``` *What `stringr` function are they equivalent to?* `paste()` and `paste0()` are similar to `str_c()` though are different in how they handle NAs (see below). They also will return a warning when recycling vectors whose legth do not have a common factor. ``` paste(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## [1] "a-x" "b-y" "x-x" ``` ``` str_c(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## Warning in stri_c(..., sep = sep, collapse = collapse, ignore_null = TRUE): ## longer object length is not a multiple of shorter object length ``` ``` ## [1] "a-x" "b-y" "x-x" ``` *How do the functions differ in their handling of `NA`?* ``` paste(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] "a-NA" "b-y" ``` ``` str_c(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] NA "b-y" ``` 2. *In your own words, describe the difference between the `sep` and `collapse` arguments to `str_c()`.* `sep` puts characters between items within a vector, collapse puts a character between vectors being collapsed 3. *Use `str_length()` and `str_sub()` to extract the middle character from a string.* ``` x <- "world" str_sub(x, start = ceiling(str_length(x) / 2), end = ceiling(str_length(x) / 2)) ``` ``` ## [1] "r" ``` *What will you do if the string has an even number of characters?* In this circumstance the above solution would take the anterior middle value, below is a solution that would return both middle values. ``` x <- "worlds" str_sub(x, ceiling(str_length(x) / 2 + 1), start = ceiling(str_length(x) / 2 + 1)) ``` ``` ## [1] "l" ``` ``` str_sub(x, start = ifelse(str_length(x) %% 2 == 0, floor(str_length(x) / 2), ceiling(str_length(x) / 2 )), end = floor(str_length(x) / 2) + 1) ``` ``` ## [1] "rl" ``` 4. *What does `str_wrap()` do? 
When might you want to use it?* * Use `indent` for first line, `exdent` for others * could use `str_wrap()` for editing of documents etc., setting `width = 1` will give each word its own line ``` str_wrap("Tonight, we dine in Hell.", width = 10, indent = 0, exdent = 3) %>% writeLines() ``` ``` ## Tonight, ## we dine in ## Hell. ``` 5. *What does `str_trim()` do? What’s the opposite of `str_trim()`?* Removes whitespace from beginning and end of character, `side` argument specifies which side ``` str_trim(" so much white space ", side = "right") # (default is 'both') ``` ``` ## [1] " so much white space" ``` 6. *Write a function that turns (e.g.) a vector `c("a", "b", "c")` into the string `a, b, and c`. Think carefully about what it should do if given a vector of length 0, 1, or 2\.* ``` vec_to_string <- function(x) { #If 1 or 0 length vector if (length(x) < 2) return(x) comma <- ifelse(length(x) > 2, ", ", " ") b <- str_c(x, collapse = comma) #replace ',' with 'and' in last str_sub(b,-(str_length(x)[length(x)] + 1), -(str_length(x)[length(x)] + 1)) <- " and " return(b) } x <- c("a", "b", "c", "d") vec_to_string(x) ``` ``` ## [1] "a, b, c, and d" ``` ### 14\.2\.5 1. *In code that doesn’t use stringr, you’ll often see `paste()` and `paste0()`. What’s the difference between the two functions?* * `paste0()` has no `sep` argument and just appends any value provided like another string vector. * They differ from `str_c()` in that they automatically convert `NA` values to character. ``` paste("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "a-b-c-x" "a-b-c-y" ``` ``` paste0("a", "b", "c", c("x", "y"), sep = "-") ``` ``` ## [1] "abcx-" "abcy-" ``` *What `stringr` function are they equivalent to?* `paste()` and `paste0()` are similar to `str_c()` though are different in how they handle NAs (see below). They also will return a warning when recycling vectors whose legth do not have a common factor. ``` paste(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## [1] "a-x" "b-y" "x-x" ``` ``` str_c(c("a", "b", "x"), c("x", "y"), sep = "-") ``` ``` ## Warning in stri_c(..., sep = sep, collapse = collapse, ignore_null = TRUE): ## longer object length is not a multiple of shorter object length ``` ``` ## [1] "a-x" "b-y" "x-x" ``` *How do the functions differ in their handling of `NA`?* ``` paste(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] "a-NA" "b-y" ``` ``` str_c(c("a", "b"), c(NA, "y"), sep = "-") ``` ``` ## [1] NA "b-y" ``` 2. *In your own words, describe the difference between the `sep` and `collapse` arguments to `str_c()`.* `sep` puts characters between items within a vector, collapse puts a character between vectors being collapsed 3. *Use `str_length()` and `str_sub()` to extract the middle character from a string.* ``` x <- "world" str_sub(x, start = ceiling(str_length(x) / 2), end = ceiling(str_length(x) / 2)) ``` ``` ## [1] "r" ``` *What will you do if the string has an even number of characters?* In this circumstance the above solution would take the anterior middle value, below is a solution that would return both middle values. ``` x <- "worlds" str_sub(x, ceiling(str_length(x) / 2 + 1), start = ceiling(str_length(x) / 2 + 1)) ``` ``` ## [1] "l" ``` ``` str_sub(x, start = ifelse(str_length(x) %% 2 == 0, floor(str_length(x) / 2), ceiling(str_length(x) / 2 )), end = floor(str_length(x) / 2) + 1) ``` ``` ## [1] "rl" ``` 4. *What does `str_wrap()` do? 
When might you want to use it?* * Use `indent` for first line, `exdent` for others * could use `str_wrap()` for editing of documents etc., setting `width = 1` will give each word its own line ``` str_wrap("Tonight, we dine in Hell.", width = 10, indent = 0, exdent = 3) %>% writeLines() ``` ``` ## Tonight, ## we dine in ## Hell. ``` 5. *What does `str_trim()` do? What’s the opposite of `str_trim()`?* Removes whitespace from beginning and end of character, `side` argument specifies which side ``` str_trim(" so much white space ", side = "right") # (default is 'both') ``` ``` ## [1] " so much white space" ``` 6. *Write a function that turns (e.g.) a vector `c("a", "b", "c")` into the string `a, b, and c`. Think carefully about what it should do if given a vector of length 0, 1, or 2\.* ``` vec_to_string <- function(x) { #If 1 or 0 length vector if (length(x) < 2) return(x) comma <- ifelse(length(x) > 2, ", ", " ") b <- str_c(x, collapse = comma) #replace ',' with 'and' in last str_sub(b,-(str_length(x)[length(x)] + 1), -(str_length(x)[length(x)] + 1)) <- " and " return(b) } x <- c("a", "b", "c", "d") vec_to_string(x) ``` ``` ## [1] "a, b, c, and d" ``` 14\.3: Matching patterns w/ regex --------------------------------- ``` x <- c("apple", "banana", "pear") str_view(x, "an") ``` To match a literal `\` need `\\\\` because both string and regex will escape it. ``` x <- "a\\b" writeLines(x) ``` ``` ## a\b ``` ``` str_view(x,"\\\\") ``` Using `\b` to set boundary between words (not used often) ``` apropos("\\bsum\\b") ``` ``` ## [1] "contr.sum" "sum" ``` ``` apropos("^(sum)$") ``` ``` ## [1] "sum" ``` Other special characters: * `\d`: matches any digit. * `\s`: matches any whitespace (e.g. space, tab, newline). * `[abc]`: matches a, b, or c. * `[^abc]`: matches anything except a, b, or c. Controlling number of times: * `?`: 0 or 1 * `+`: 1 or more * `*`: 0 or more * `{n}`: exactly n * `{n,}`: n or more * `{,m}`: at most m * `{n,m}`: between n and m By default these matches are “greedy”: they will match the longest string possible. You can make them “lazy”, matching the shortest string possible by putting a `?` after them. This is an advanced feature of regular expressions, but it’s useful to know that it exists: ``` x <- "1888 is the longest year in Roman numerals: MDCCCLXXXVIII" str_view(x, 'C{2,3}') ``` ``` str_view(x, 'C{2,3}?') ``` ### 14\.3\.1\.1 1. *Explain why each of these strings don’t match a `\`: `"\"`, `"\\"`, `"\\\"`.* `"\"` \-\> leaves open quote string because escapes quote `"\\"`, \-\> escapes second `\` so left with blank `"\\\"` \-\> third `\` escapes quote so left with open quote as well 2. *How would you match the sequence `"'\`?* ``` x <- "alfred\"'\\goes" writeLines(x) ``` ``` ## alfred"'\goes ``` ``` str_view(x, "\\\"'\\\\") ``` 3. *What patterns will the regular expression `\..\..\..` match?* Would match 6 character string of following form “(dot)(anychar)(dot)(anychar)(dot)(anychar)” ``` x <- c("alf.r.e.dd.ss..lsdf.d.kj") str_view(x, pattern = "\\..\\..\\..") ``` *How would you represent it as a string?* ``` x_pattern <- "\\..\\..\\.." writeLines(x_pattern) ``` ``` ## \..\..\.. ``` ### 14\.3\.2\.1 1. *How would you match the literal string `"$^$"`?* ``` x <- "so it goes $^$ here" str_view(x, "\\$\\^\\$") ``` 2. *Given the corpus of common words in `stringr::words`, create regular expressions that find all words that:* 1. *Start with “y”.* ``` str_view(stringr::words, "^y", match = TRUE) ``` 2. *End with “x”* ``` str_view(stringr::words, "x$", match = TRUE) ``` 3. 
*Are exactly three letters long. (Don’t cheat by using `str_length()`!)* ``` str_view(stringr::words, "^...$", match = TRUE) ``` 4. *Have seven letters or more.* ``` str_view(stringr::words, ".......", match = TRUE) ``` Since this list is long, you might want to use the `match` argument to `str_view()` to show only the matching or non\-matching words. ### 14\.3\.3\.1 1. *Create regular expressions to find all words that:* 1. *Start with a vowel.* ``` str_view(stringr::words, "^[aeiou]", match = TRUE) ``` 2. *That only contain consonants. (Hint: thinking about matching “not”\-vowels.)* ``` str_view(stringr::words, "^[^aeiou]*[^aeiouy]$", match = TRUE) ``` 3. *End with `ed`, but not with `eed`.* ``` str_view(stringr::words, "[^e]ed$", match = TRUE) ``` 4. *End with `ing` or `ise`.* ``` str_view(stringr::words, "(ing|ise)$", match = TRUE) ``` 2. *Empirically verify the rule “i before e except after c”.* ``` str_view(stringr::words, "(^(ei))|cie|[^c]ei", match = TRUE) ``` 3. *Is “q” always followed by a “u”?* ``` str_view(stringr::words, "q[^u]", match = TRUE) ``` of the words in list, yes. 4. *Write a regular expression that matches a word if it’s probably written in British English, not American English.* ``` str_view(stringr::words, "(l|b)our|parat", match = TRUE) ``` 5. *Create a regular expression that will match telephone numbers as commonly written in your country.* ``` x <- c("dkl kls. klk. _", "(425) 591-6020", "her number is (581) 434-3242", "442", " dsi") str_view(x, "\\(\\d\\d\\d\\)\\s\\d\\d\\d-\\d\\d\\d\\d") ``` Aboves not a good way to solve this, will see better methods in next section. ### 14\.3\.4\.1 1. *Describe the equivalents of `?`, `+`, `*` in `{m,n}` form.* `?` : `{0,1}` `+` : `{1, }` `*` : `{0, }` 2. *Describe in words what these regular expressions match: (read carefully to see if I’m using a regular expression or a string that defines a regular expression.)* 1. `^.*$` : starts with anything, and ends with anything–matches whole thing ``` str_view(x, "^.*$") ``` 2. `"\\{.+\\}"` : match text in brackets greater than nothing ``` x <- c("test", "some in {brackets}", "just {} no match") str_view(x, "\\{.+\\}") ``` 3. `\d{4}-\d{2}-\d{2}`: 4 numbers \- 2 numbers \- 2 numbers ``` x <- c("4444-22-22", "test", "333-4444-22") str_view(x, "\\d{4}-\\d{2}-\\d{2}") ``` 4. `"\\\\{4}"`: 4 brackets ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") writeLines(x) ``` ``` ## \\\\ ## \\\ ## \\ ## \ ``` ``` str_view(x, "\\\\{4}") ``` ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") str_view(x, "\\\\\\\\") ``` 3. *Create regular expressions to find all words that:* 1. find all words that start with three consonants ``` str_view(stringr::words, "^[^aeoiouy]{3}", match = TRUE) ``` * Include `y` because when it shows up otherwise, is in vowel form. 2. have three or more vowels in a row ``` str_view(stringr::words, "[aeiou]{3}", match = TRUE) ``` In this case, do not include the `y`. 3. have 2 or more vowel\-consonant pairs in a row ``` str_view(stringr::words, "([aeiou][^aeiou]){2,}", match = TRUE) ``` 4. *Solve the beginner regexp crosswords at* *<https://regexcrossword.com/challenges/beginner>.* ### 14\.3\.5\.1 1. *Describe, in words, what these expressions will match:* * I change questions 1 and 3 to what I think they were meant to be written as `(.)\\1\\1` and `(.)\\1` respectively. 1. `(.)\\1\\1` : repeat the char in the first group, and then repeat that char again 2. `"(.)(.)\\2\\1"` : 1st char, 2nd char followed by 2nd char, first char 3. `(..)\\1` : 2 chars repeated twice 4. 
`"(.).\\1.\\1"` : chars shows\-up 3 times with one character between each 5. `"(.)(.)(.).*\\3\\2\\1"` : 3 chars in one order with \* chars between, then 3 chars with 3 letters in the reverse order of what it started ``` x <- c("steefddff", "ssdfsdfffsdasdlkd", "DLKKJIOWdkl", "klnlsd", "t11", "(.)\1\1") str_view_all(x, "(.)\\1\\1", match = TRUE) #xxx ``` ``` str_view_all(fruit, "(.)(.)\\2\\1", match = TRUE) #xyyx ``` ``` str_view_all(fruit, "(..)\\1", match = TRUE) #xxyy ``` ``` str_view(stringr::words, "(.).\\1.\\1", match = TRUE) #x.x.x ``` ``` str_view(stringr::words, "(.)(.)(.).*\\3\\2\\1", match = TRUE) #xyz.*zyx ``` 2. *Construct regular expressions to match words that:* 1. *Start and end with the same character.* ``` str_view(stringr::words, "^(.).*\\1$", match = TRUE) ``` 2. *Contain a repeated pair of letters* (e.g. “church” contains “ch” repeated twice.) ``` str_view(stringr::words, "(..).*\\1", match = TRUE) ``` 3. *Contain one letter repeated in at least three places* (e.g. “eleven” contains three “e”s.) ``` str_view(stringr::words, "(.).+\\1.+\\1", match = TRUE) ``` ### 14\.3\.1\.1 1. *Explain why each of these strings don’t match a `\`: `"\"`, `"\\"`, `"\\\"`.* `"\"` \-\> leaves open quote string because escapes quote `"\\"`, \-\> escapes second `\` so left with blank `"\\\"` \-\> third `\` escapes quote so left with open quote as well 2. *How would you match the sequence `"'\`?* ``` x <- "alfred\"'\\goes" writeLines(x) ``` ``` ## alfred"'\goes ``` ``` str_view(x, "\\\"'\\\\") ``` 3. *What patterns will the regular expression `\..\..\..` match?* Would match 6 character string of following form “(dot)(anychar)(dot)(anychar)(dot)(anychar)” ``` x <- c("alf.r.e.dd.ss..lsdf.d.kj") str_view(x, pattern = "\\..\\..\\..") ``` *How would you represent it as a string?* ``` x_pattern <- "\\..\\..\\.." writeLines(x_pattern) ``` ``` ## \..\..\.. ``` ### 14\.3\.2\.1 1. *How would you match the literal string `"$^$"`?* ``` x <- "so it goes $^$ here" str_view(x, "\\$\\^\\$") ``` 2. *Given the corpus of common words in `stringr::words`, create regular expressions that find all words that:* 1. *Start with “y”.* ``` str_view(stringr::words, "^y", match = TRUE) ``` 2. *End with “x”* ``` str_view(stringr::words, "x$", match = TRUE) ``` 3. *Are exactly three letters long. (Don’t cheat by using `str_length()`!)* ``` str_view(stringr::words, "^...$", match = TRUE) ``` 4. *Have seven letters or more.* ``` str_view(stringr::words, ".......", match = TRUE) ``` Since this list is long, you might want to use the `match` argument to `str_view()` to show only the matching or non\-matching words. ### 14\.3\.3\.1 1. *Create regular expressions to find all words that:* 1. *Start with a vowel.* ``` str_view(stringr::words, "^[aeiou]", match = TRUE) ``` 2. *That only contain consonants. (Hint: thinking about matching “not”\-vowels.)* ``` str_view(stringr::words, "^[^aeiou]*[^aeiouy]$", match = TRUE) ``` 3. *End with `ed`, but not with `eed`.* ``` str_view(stringr::words, "[^e]ed$", match = TRUE) ``` 4. *End with `ing` or `ise`.* ``` str_view(stringr::words, "(ing|ise)$", match = TRUE) ``` 2. *Empirically verify the rule “i before e except after c”.* ``` str_view(stringr::words, "(^(ei))|cie|[^c]ei", match = TRUE) ``` 3. *Is “q” always followed by a “u”?* ``` str_view(stringr::words, "q[^u]", match = TRUE) ``` of the words in list, yes. 4. 
*Write a regular expression that matches a word if it’s probably written in British English, not American English.* ``` str_view(stringr::words, "(l|b)our|parat", match = TRUE) ``` 5. *Create a regular expression that will match telephone numbers as commonly written in your country.* ``` x <- c("dkl kls. klk. _", "(425) 591-6020", "her number is (581) 434-3242", "442", " dsi") str_view(x, "\\(\\d\\d\\d\\)\\s\\d\\d\\d-\\d\\d\\d\\d") ``` Aboves not a good way to solve this, will see better methods in next section. ### 14\.3\.4\.1 1. *Describe the equivalents of `?`, `+`, `*` in `{m,n}` form.* `?` : `{0,1}` `+` : `{1, }` `*` : `{0, }` 2. *Describe in words what these regular expressions match: (read carefully to see if I’m using a regular expression or a string that defines a regular expression.)* 1. `^.*$` : starts with anything, and ends with anything–matches whole thing ``` str_view(x, "^.*$") ``` 2. `"\\{.+\\}"` : match text in brackets greater than nothing ``` x <- c("test", "some in {brackets}", "just {} no match") str_view(x, "\\{.+\\}") ``` 3. `\d{4}-\d{2}-\d{2}`: 4 numbers \- 2 numbers \- 2 numbers ``` x <- c("4444-22-22", "test", "333-4444-22") str_view(x, "\\d{4}-\\d{2}-\\d{2}") ``` 4. `"\\\\{4}"`: 4 brackets ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") writeLines(x) ``` ``` ## \\\\ ## \\\ ## \\ ## \ ``` ``` str_view(x, "\\\\{4}") ``` ``` x <- c("\\\\\\\\", "\\\\\\", "\\\\", "\\") str_view(x, "\\\\\\\\") ``` 3. *Create regular expressions to find all words that:* 1. find all words that start with three consonants ``` str_view(stringr::words, "^[^aeoiouy]{3}", match = TRUE) ``` * Include `y` because when it shows up otherwise, is in vowel form. 2. have three or more vowels in a row ``` str_view(stringr::words, "[aeiou]{3}", match = TRUE) ``` In this case, do not include the `y`. 3. have 2 or more vowel\-consonant pairs in a row ``` str_view(stringr::words, "([aeiou][^aeiou]){2,}", match = TRUE) ``` 4. *Solve the beginner regexp crosswords at* *<https://regexcrossword.com/challenges/beginner>.* ### 14\.3\.5\.1 1. *Describe, in words, what these expressions will match:* * I change questions 1 and 3 to what I think they were meant to be written as `(.)\\1\\1` and `(.)\\1` respectively. 1. `(.)\\1\\1` : repeat the char in the first group, and then repeat that char again 2. `"(.)(.)\\2\\1"` : 1st char, 2nd char followed by 2nd char, first char 3. `(..)\\1` : 2 chars repeated twice 4. `"(.).\\1.\\1"` : chars shows\-up 3 times with one character between each 5. `"(.)(.)(.).*\\3\\2\\1"` : 3 chars in one order with \* chars between, then 3 chars with 3 letters in the reverse order of what it started ``` x <- c("steefddff", "ssdfsdfffsdasdlkd", "DLKKJIOWdkl", "klnlsd", "t11", "(.)\1\1") str_view_all(x, "(.)\\1\\1", match = TRUE) #xxx ``` ``` str_view_all(fruit, "(.)(.)\\2\\1", match = TRUE) #xyyx ``` ``` str_view_all(fruit, "(..)\\1", match = TRUE) #xxyy ``` ``` str_view(stringr::words, "(.).\\1.\\1", match = TRUE) #x.x.x ``` ``` str_view(stringr::words, "(.)(.)(.).*\\3\\2\\1", match = TRUE) #xyz.*zyx ``` 2. *Construct regular expressions to match words that:* 1. *Start and end with the same character.* ``` str_view(stringr::words, "^(.).*\\1$", match = TRUE) ``` 2. *Contain a repeated pair of letters* (e.g. “church” contains “ch” repeated twice.) ``` str_view(stringr::words, "(..).*\\1", match = TRUE) ``` 3. *Contain one letter repeated in at least three places* (e.g. “eleven” contains three “e”s.) 
``` str_view(stringr::words, "(.).+\\1.+\\1", match = TRUE) ``` 14\.4 Tools ----------- ``` noun <- "(a|the) ([^ \\.]+)" has_noun <- sentences %>% str_subset(noun) %>% head(10) has_noun %>% str_extract_all(noun, simplify = TRUE) #creates split into seperate pieces has_noun %>% str_match_all(noun) #Can make dataframe with, but need to name all tibble(has_noun = has_noun) %>% extract(has_noun, into = c("article", "noun"), regex = noun) ``` * When using `boundary()` with `str_split` can set to “character”, “line”, “sentence”, and “word” and gives alternative to splitting by pattern. ### 14\.4\.2 1. *For each of the following challenges, try solving it by using both a single* *regular expression, and a combination of multiple `str_detect()` calls.* 1. *Find all words that start or end with `x`.* ``` str_subset(words, "^x|x$") ``` ``` ## [1] "box" "sex" "six" "tax" ``` 2. *Find all words that start with a vowel and end with a consonant.* ``` str_subset(words, "^[aeiou].*[^aeiouy]$") ``` ``` ## [1] "about" "accept" "account" "across" "act" ## [6] "actual" "add" "address" "admit" "affect" ## [11] "afford" "after" "afternoon" "again" "against" ## [16] "agent" "air" "all" "allow" "almost" ## [21] "along" "alright" "although" "always" "amount" ## [26] "and" "another" "answer" "apart" "apparent" ## [31] "appear" "appoint" "approach" "arm" "around" ## [36] "art" "as" "ask" "at" "attend" ## [41] "awful" "each" "east" "eat" "effect" ## [46] "egg" "eight" "either" "elect" "electric" ## [51] "eleven" "end" "english" "enough" "enter" ## [56] "environment" "equal" "especial" "even" "evening" ## [61] "ever" "exact" "except" "exist" "expect" ## [66] "explain" "express" "if" "important" "in" ## [71] "indeed" "individual" "inform" "instead" "interest" ## [76] "invest" "it" "item" "obvious" "occasion" ## [81] "odd" "of" "off" "offer" "often" ## [86] "old" "on" "open" "or" "order" ## [91] "original" "other" "ought" "out" "over" ## [96] "own" "under" "understand" "union" "unit" ## [101] "unless" "until" "up" "upon" "usual" ``` Counted `y` as a vowel if ending with, but not to start. This does not work perfect. For example words like `ygritte` would still be included even though `y` is activng as a vowel there whereas words like `boy` would be excluded even though acting as a consonant there. From here on out I am going to always exclude `y`. 3. *Are there any words that contain at least one of each different vowel?* ``` vowels <- c("a","e","i","o","u") words[str_detect(words, "a") & str_detect(words, "e") & str_detect(words, "i") & str_detect(words, "o") & str_detect(words, "u")] ``` ``` ## character(0) ``` No. 2. *What word has the highest number of vowels? What word has the highest* *proportion of vowels? (Hint: what is the denominator?)* ``` vowel_counts <- tibble(words = words, n_string = str_length(words), n_vowel = str_count(words, vowels), prop_vowel = n_vowel / n_string) ``` ‘Experience’ has the most vowels ``` vowel_counts %>% arrange(desc(n_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 experience 10 4 0.4 ## 2 individual 10 3 0.3 ## 3 achieve 7 2 0.286 ## 4 actual 6 2 0.333 ## 5 afternoon 9 2 0.222 ## 6 against 7 2 0.286 ## 7 already 7 2 0.286 ## 8 america 7 2 0.286 ## 9 benefit 7 2 0.286 ## 10 choose 6 2 0.333 ## # ... 
with 970 more rows ``` ‘a’ has the highest proportion ``` vowel_counts %>% arrange(desc(prop_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 a 1 1 1 ## 2 too 3 2 0.667 ## 3 wee 3 2 0.667 ## 4 feed 4 2 0.5 ## 5 in 2 1 0.5 ## 6 look 4 2 0.5 ## 7 need 4 2 0.5 ## 8 room 4 2 0.5 ## 9 so 2 1 0.5 ## 10 soon 4 2 0.5 ## # ... with 970 more rows ``` ### 14\.4\.3\.1 1. *In the previous example, you might have noticed that the regular* *expression matched “flickered”, which is not a colour. Modify the* *regex to fix the problem.* Add space in front of colors: ``` colours <- c("red", "orange", "yellow", "green", "blue", "purple") %>% paste0(" ", .) colour_match <- str_c(colours, collapse = "|") more <- sentences[str_count(sentences, colour_match) > 1] str_view_all(more, colour_match) ``` 2. *From the Harvard sentences data, extract:* 1. *The first word from each sentence.* ``` str_extract(sentences, "[A-z]*") ``` 2. *All words ending in `ing`.* ``` #ends in "ing" or "ing." sent_ing <- str_subset(sentences, ".*ing(\\.|\\s)") str_extract_all(sent_ing, "[A-z]+ing", simplify=TRUE) ``` 3. *All plurals.* ``` str_subset(sentences, "[A-z]*s(\\.|\\s)") %>% #take all sentences that have a word ending in s str_extract_all("[A-z]*s\\b", simplify = TRUE) %>% .[str_length(.) > 3] %>% #get rid of the short words str_subset(".*[^s]s$") %>% #get rid of words ending in 'ss' str_subset(".*[^i]s$") #get rid of 'this' ``` ### 14\.4\.4\.1 1. *Find all words that come after a “number” like “one”, “two”, “three” etc.* *Pull out both the number and the word.* ``` #Create regex expression nums <- c("one", "two", "three", "four", "five", "six", "seven", "eight", "nine") nums_c <- str_c(nums, collapse = "|") # see stringr cheatsheet: "(?<![:alpha:])" means not preceded by re <- str_c("(", "(?<![:alpha:])", "(", nums_c, "))", " ", "([^ \\.]+)", sep = "") sentences %>% str_subset(regex(re, ignore_case = TRUE)) %>% str_extract_all(regex(re, ignore_case = TRUE)) %>% unlist() %>% tibble::enframe(name = NULL) %>% separate(col = "value", into = c("num", "following"), remove = FALSE) ``` ``` ## # A tibble: 30 x 3 ## value num following ## <chr> <chr> <chr> ## 1 Four hours Four hours ## 2 Two blue Two blue ## 3 seven books seven books ## 4 two met two met ## 5 two factors two factors ## 6 three lists three lists ## 7 Two plus Two plus ## 8 seven is seven is ## 9 two when two when ## 10 Eight miles Eight miles ## # ... with 20 more rows ``` * I’d initially appended `"\\b"` in front of each number to prevent things like “someone” being captured – however this didn’t work with cases where a sentence started with a number – hence switched to using the *not preceded by* method in the [stringr cheatsheet](https://www.rstudio.com/resources/cheatsheets/). 2. *Find all contractions. 
Separate out the pieces before and after the* *apostrophe.* ``` #note the () facilitate the split with functions contr <- "([^ \\.]+)'([^ \\.]*)" sentences %>% #note the improvement this word definition is to the above [^ ]+ str_subset(contr) %>% str_match_all(contr) ``` ``` ## [[1]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[2]] ## [,1] [,2] [,3] ## [1,] "man's" "man" "s" ## ## [[3]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[4]] ## [,1] [,2] [,3] ## [1,] "store's" "store" "s" ## ## [[5]] ## [,1] [,2] [,3] ## [1,] "workmen's" "workmen" "s" ## ## [[6]] ## [,1] [,2] [,3] ## [1,] "Let's" "Let" "s" ## ## [[7]] ## [,1] [,2] [,3] ## [1,] "sun's" "sun" "s" ## ## [[8]] ## [,1] [,2] [,3] ## [1,] "child's" "child" "s" ## ## [[9]] ## [,1] [,2] [,3] ## [1,] "king's" "king" "s" ## ## [[10]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[11]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[12]] ## [,1] [,2] [,3] ## [1,] "queen's" "queen" "s" ## ## [[13]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[14]] ## [,1] [,2] [,3] ## [1,] "pirate's" "pirate" "s" ## ## [[15]] ## [,1] [,2] [,3] ## [1,] "neighbor's" "neighbor" "s" ``` ### 14\.4\.5\.1 1. *Replace all forward slashes in a string with backslashes.* ``` x <- c("test/dklsk/") str_replace_all(x, "/", "\\\\") %>% writeLines() ``` ``` ## test\dklsk\ ``` 2. *Implement a simple version of `str_to_lower()` using `replace_all()`.* ``` x <- c("BIdklsKOS") str_replace_all(x, "([A-Z])", tolower) ``` ``` ## [1] "bidklskos" ``` 3. *Switch the first and last letters in `words`. Which of those strings* *are still words?* ``` str_replace(words, "(^.)(.*)(.$)", "\\3\\2\\1") ``` Any words that start and end with the same letter, e.g. ‘treat’, as well as a few other examples like, war –\> raw . ### 14\.4\.6\.1 1. *Split up a string like `"apples, pears, and bananas"` into individual* *components.* ``` x <- "apples, pears, and bananas" str_split(x, ",* ") #note that regular expression works to handle commas as well ``` ``` ## [[1]] ## [1] "apples" "pears" "and" "bananas" ``` 2. *Why is it better to split up by `boundary("word")` than `" "`?* Handles commas and punctuation[32](#fn32). ``` str_split(x, boundary("word")) ``` ``` ## [[1]] ## [1] "apples" "pears" "and" "bananas" ``` 3. *What does splitting with an empty string (`""`) do? Experiment, and* *then read the documentation.* Splitting by an empty string splits up each character. ``` str_split(x,"") ``` ``` ## [[1]] ## [1] "a" "p" "p" "l" "e" "s" "," " " "p" "e" "a" "r" "s" "," " " "a" "n" ## [18] "d" " " "b" "a" "n" "a" "n" "a" "s" ``` * splits each character into an individual element (and creates elements for spaces between strings) ### 14\.4\.2 1. *For each of the following challenges, try solving it by using both a single* *regular expression, and a combination of multiple `str_detect()` calls.* 1. *Find all words that start or end with `x`.* ``` str_subset(words, "^x|x$") ``` ``` ## [1] "box" "sex" "six" "tax" ``` 2. 
*Find all words that start with a vowel and end with a consonant.* ``` str_subset(words, "^[aeiou].*[^aeiouy]$") ``` ``` ## [1] "about" "accept" "account" "across" "act" ## [6] "actual" "add" "address" "admit" "affect" ## [11] "afford" "after" "afternoon" "again" "against" ## [16] "agent" "air" "all" "allow" "almost" ## [21] "along" "alright" "although" "always" "amount" ## [26] "and" "another" "answer" "apart" "apparent" ## [31] "appear" "appoint" "approach" "arm" "around" ## [36] "art" "as" "ask" "at" "attend" ## [41] "awful" "each" "east" "eat" "effect" ## [46] "egg" "eight" "either" "elect" "electric" ## [51] "eleven" "end" "english" "enough" "enter" ## [56] "environment" "equal" "especial" "even" "evening" ## [61] "ever" "exact" "except" "exist" "expect" ## [66] "explain" "express" "if" "important" "in" ## [71] "indeed" "individual" "inform" "instead" "interest" ## [76] "invest" "it" "item" "obvious" "occasion" ## [81] "odd" "of" "off" "offer" "often" ## [86] "old" "on" "open" "or" "order" ## [91] "original" "other" "ought" "out" "over" ## [96] "own" "under" "understand" "union" "unit" ## [101] "unless" "until" "up" "upon" "usual" ``` Counted `y` as a vowel if ending with, but not to start. This does not work perfect. For example words like `ygritte` would still be included even though `y` is activng as a vowel there whereas words like `boy` would be excluded even though acting as a consonant there. From here on out I am going to always exclude `y`. 3. *Are there any words that contain at least one of each different vowel?* ``` vowels <- c("a","e","i","o","u") words[str_detect(words, "a") & str_detect(words, "e") & str_detect(words, "i") & str_detect(words, "o") & str_detect(words, "u")] ``` ``` ## character(0) ``` No. 2. *What word has the highest number of vowels? What word has the highest* *proportion of vowels? (Hint: what is the denominator?)* ``` vowel_counts <- tibble(words = words, n_string = str_length(words), n_vowel = str_count(words, vowels), prop_vowel = n_vowel / n_string) ``` ‘Experience’ has the most vowels ``` vowel_counts %>% arrange(desc(n_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 experience 10 4 0.4 ## 2 individual 10 3 0.3 ## 3 achieve 7 2 0.286 ## 4 actual 6 2 0.333 ## 5 afternoon 9 2 0.222 ## 6 against 7 2 0.286 ## 7 already 7 2 0.286 ## 8 america 7 2 0.286 ## 9 benefit 7 2 0.286 ## 10 choose 6 2 0.333 ## # ... with 970 more rows ``` ‘a’ has the highest proportion ``` vowel_counts %>% arrange(desc(prop_vowel)) ``` ``` ## # A tibble: 980 x 4 ## words n_string n_vowel prop_vowel ## <chr> <int> <int> <dbl> ## 1 a 1 1 1 ## 2 too 3 2 0.667 ## 3 wee 3 2 0.667 ## 4 feed 4 2 0.5 ## 5 in 2 1 0.5 ## 6 look 4 2 0.5 ## 7 need 4 2 0.5 ## 8 room 4 2 0.5 ## 9 so 2 1 0.5 ## 10 soon 4 2 0.5 ## # ... with 970 more rows ``` ### 14\.4\.3\.1 1. *In the previous example, you might have noticed that the regular* *expression matched “flickered”, which is not a colour. Modify the* *regex to fix the problem.* Add space in front of colors: ``` colours <- c("red", "orange", "yellow", "green", "blue", "purple") %>% paste0(" ", .) colour_match <- str_c(colours, collapse = "|") more <- sentences[str_count(sentences, colour_match) > 1] str_view_all(more, colour_match) ``` 2. *From the Harvard sentences data, extract:* 1. *The first word from each sentence.* ``` str_extract(sentences, "[A-z]*") ``` 2. *All words ending in `ing`.* ``` #ends in "ing" or "ing." 
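#str_subset() keeps sentences where "ing" is followed by a period or whitespace
#str_extract_all() below then pulls each word ending in "ing" from those sentences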
sent_ing <- str_subset(sentences, ".*ing(\\.|\\s)") str_extract_all(sent_ing, "[A-z]+ing", simplify=TRUE) ``` 3. *All plurals.* ``` str_subset(sentences, "[A-z]*s(\\.|\\s)") %>% #take all sentences that have a word ending in s str_extract_all("[A-z]*s\\b", simplify = TRUE) %>% .[str_length(.) > 3] %>% #get rid of the short words str_subset(".*[^s]s$") %>% #get rid of words ending in 'ss' str_subset(".*[^i]s$") #get rid of 'this' ``` ### 14\.4\.4\.1 1. *Find all words that come after a “number” like “one”, “two”, “three” etc.* *Pull out both the number and the word.* ``` #Create regex expression nums <- c("one", "two", "three", "four", "five", "six", "seven", "eight", "nine") nums_c <- str_c(nums, collapse = "|") # see stringr cheatsheet: "(?<![:alpha:])" means not preceded by re <- str_c("(", "(?<![:alpha:])", "(", nums_c, "))", " ", "([^ \\.]+)", sep = "") sentences %>% str_subset(regex(re, ignore_case = TRUE)) %>% str_extract_all(regex(re, ignore_case = TRUE)) %>% unlist() %>% tibble::enframe(name = NULL) %>% separate(col = "value", into = c("num", "following"), remove = FALSE) ``` ``` ## # A tibble: 30 x 3 ## value num following ## <chr> <chr> <chr> ## 1 Four hours Four hours ## 2 Two blue Two blue ## 3 seven books seven books ## 4 two met two met ## 5 two factors two factors ## 6 three lists three lists ## 7 Two plus Two plus ## 8 seven is seven is ## 9 two when two when ## 10 Eight miles Eight miles ## # ... with 20 more rows ``` * I’d initially appended `"\\b"` in front of each number to prevent things like “someone” being captured – however this didn’t work with cases where a sentence started with a number – hence switched to using the *not preceded by* method in the [stringr cheatsheet](https://www.rstudio.com/resources/cheatsheets/). 2. *Find all contractions. Separate out the pieces before and after the* *apostrophe.* ``` #note the () facilitate the split with functions contr <- "([^ \\.]+)'([^ \\.]*)" sentences %>% #note the improvement this word definition is to the above [^ ]+ str_subset(contr) %>% str_match_all(contr) ``` ``` ## [[1]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[2]] ## [,1] [,2] [,3] ## [1,] "man's" "man" "s" ## ## [[3]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[4]] ## [,1] [,2] [,3] ## [1,] "store's" "store" "s" ## ## [[5]] ## [,1] [,2] [,3] ## [1,] "workmen's" "workmen" "s" ## ## [[6]] ## [,1] [,2] [,3] ## [1,] "Let's" "Let" "s" ## ## [[7]] ## [,1] [,2] [,3] ## [1,] "sun's" "sun" "s" ## ## [[8]] ## [,1] [,2] [,3] ## [1,] "child's" "child" "s" ## ## [[9]] ## [,1] [,2] [,3] ## [1,] "king's" "king" "s" ## ## [[10]] ## [,1] [,2] [,3] ## [1,] "It's" "It" "s" ## ## [[11]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[12]] ## [,1] [,2] [,3] ## [1,] "queen's" "queen" "s" ## ## [[13]] ## [,1] [,2] [,3] ## [1,] "don't" "don" "t" ## ## [[14]] ## [,1] [,2] [,3] ## [1,] "pirate's" "pirate" "s" ## ## [[15]] ## [,1] [,2] [,3] ## [1,] "neighbor's" "neighbor" "s" ``` ### 14\.4\.5\.1 1. *Replace all forward slashes in a string with backslashes.* ``` x <- c("test/dklsk/") str_replace_all(x, "/", "\\\\") %>% writeLines() ``` ``` ## test\dklsk\ ``` 2. *Implement a simple version of `str_to_lower()` using `replace_all()`.* ``` x <- c("BIdklsKOS") str_replace_all(x, "([A-Z])", tolower) ``` ``` ## [1] "bidklskos" ``` 3. *Switch the first and last letters in `words`. Which of those strings* *are still words?* ``` str_replace(words, "(^.)(.*)(.$)", "\\3\\2\\1") ``` Any words that start and end with the same letter, e.g. 
‘treat’, as well as a few other examples like, war –\> raw .
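A quick way to check the "still words" part programmatically (a minimal sketch, assuming the stringr `words` vector is attached) is to keep only the switched strings that also appear in the original word list:

```
library(stringr)

#switch first and last letters, as above
switched <- str_replace(words, "(^.)(.*)(.$)", "\\3\\2\\1")

#switched strings that are themselves entries in `words`
intersect(switched, words)
```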
Ch. 15: Factors =============== **Key questions:** * 15\.3\.1\. \#1, 3 (make the visualization and table) * 15\.5\.1\. \#1 **Functions and notes:** * `factor` make variable a factor based on `levels` provided * `fct_rev` reverses order of factors * `fct_infreq` orders levels in increasing frequency * `fct_relevel` lets you move levels to front of order * `fct_inorder` orders existing factor by order values show\-up in in data * `fct_reorder` orders input factors by other specified variables value (median by default), 3 inputs: `f`: factor to modify, `x`: input var to order by, `fun`: function to use on x, also have `desc` option * `fct_reorder2` orders input factor by max of other specified variable (good for making legends align as expected) * `fct_recode` lets you change value of each level * `fct_collapse` is variant of `fct_recode` that allows you to provide multiple old levels as a vector * `fct_lump` allows you to lump together small groups, use `n` to specify number of groups to end with Create factors by order they come\-in: Avoiding dropping levels with `drop = FALSE` ``` gss_cat %>% ggplot(aes(race))+ geom_bar()+ scale_x_discrete(drop = FALSE) ``` 15\.4: General Social Survey ---------------------------- ### 15\.3\.1 1. Explore the distribution of `rincome` (reported income). What makes the default bar chart hard to understand? How could you improve the plot? * Default bar chart has categories across the x\-asix, I flipped these to be across the y\-axis * Also, have highest values at the bottom rather than at the top and have different version of NA showing\-up at both top and bottom, all should be on one side * In `bar_prep`, I used reg expressions to extract the numeric values, arrange by that, and then set factor levels according to the new order + Solution is probably unnecessarily complicated… ``` bar_prep <- gss_cat %>% tidyr::extract(col = rincome, into =c("dollars1", "dollars2"), "([0-9]+)[^0-9]*([0-9]*)", remove = FALSE) %>% mutate_at(c("dollars1", "dollars2"), ~ifelse(is.na(.) | . == "", 0, as.numeric(.))) %>% arrange(dollars1, dollars2) %>% mutate(rincome = fct_inorder(rincome)) bar_prep %>% ggplot(aes(x = rincome)) + geom_bar() + scale_x_discrete(drop = FALSE) + coord_flip() ``` 2. What is the most common `relig` in this survey? What’s the most common `partyid`? ``` gss_cat %>% count(relig, sort = TRUE) ``` ``` ## # A tibble: 15 x 2 ## relig n ## <fct> <int> ## 1 Protestant 10846 ## 2 Catholic 5124 ## 3 None 3523 ## 4 Christian 689 ## 5 Jewish 388 ## 6 Other 224 ## 7 Buddhism 147 ## 8 Inter-nondenominational 109 ## 9 Moslem/islam 104 ## 10 Orthodox-christian 95 ## 11 No answer 93 ## 12 Hinduism 71 ## 13 Other eastern 32 ## 14 Native american 23 ## 15 Don't know 15 ``` ``` gss_cat %>% count(partyid, sort = TRUE) ``` ``` ## # A tibble: 10 x 2 ## partyid n ## <fct> <int> ## 1 Independent 4119 ## 2 Not str democrat 3690 ## 3 Strong democrat 3490 ## 4 Not str republican 3032 ## 5 Ind,near dem 2499 ## 6 Strong republican 2314 ## 7 Ind,near rep 1791 ## 8 Other party 393 ## 9 No answer 154 ## 10 Don't know 1 ``` * `relig` most common – Protestant, 10846, * `partyid` most common – Independent, 4119 3. Which `relig` does `denom` (denomination) apply to? How can you find out with a table? How can you find out with a visualisation? 
*With visualization:* ``` gss_cat %>% ggplot(aes(x=relig, fill=denom))+ geom_bar()+ coord_flip() ``` * Notice which have the widest variety of colours – are protestant, and Christian slightly*With table:* ``` gss_cat %>% count(relig, denom) %>% count(relig, sort = TRUE) ``` ``` ## # A tibble: 15 x 2 ## relig n ## <fct> <int> ## 1 Protestant 29 ## 2 Christian 4 ## 3 Other 2 ## 4 No answer 1 ## 5 Don't know 1 ## 6 Inter-nondenominational 1 ## 7 Native american 1 ## 8 Orthodox-christian 1 ## 9 Moslem/islam 1 ## 10 Other eastern 1 ## 11 Hinduism 1 ## 12 Buddhism 1 ## 13 None 1 ## 14 Jewish 1 ## 15 Catholic 1 ``` 15\.4: Modifying factor order ----------------------------- ### 15\.4\.1 1. There are some suspiciously high numbers in `tvhours`. Is the mean a good summary? ``` gss_cat %>% mutate(tvhours_fct = factor(tvhours)) %>% ggplot(aes(x = tvhours_fct)) + geom_bar() ``` * Distribution is reasonably skewed with some values showing\-up as 24 hours which seems impossible, in addition to this we have a lot of `NA` values, this may skew results * Given high number of missing values, `tvhours` may also just not be reliable, do `NA`s associate with other variables? – Perhaps could try and impute these `NA`s 2. For each factor in `gss_cat` identify whether the order of the levels is arbitrary or principled. ``` gss_cat %>% purrr::keep(is.factor) %>% purrr::map(levels) ``` ``` ## $marital ## [1] "No answer" "Never married" "Separated" "Divorced" ## [5] "Widowed" "Married" ## ## $race ## [1] "Other" "Black" "White" "Not applicable" ## ## $rincome ## [1] "No answer" "Don't know" "Refused" "$25000 or more" ## [5] "$20000 - 24999" "$15000 - 19999" "$10000 - 14999" "$8000 to 9999" ## [9] "$7000 to 7999" "$6000 to 6999" "$5000 to 5999" "$4000 to 4999" ## [13] "$3000 to 3999" "$1000 to 2999" "Lt $1000" "Not applicable" ## ## $partyid ## [1] "No answer" "Don't know" "Other party" ## [4] "Strong republican" "Not str republican" "Ind,near rep" ## [7] "Independent" "Ind,near dem" "Not str democrat" ## [10] "Strong democrat" ## ## $relig ## [1] "No answer" "Don't know" ## [3] "Inter-nondenominational" "Native american" ## [5] "Christian" "Orthodox-christian" ## [7] "Moslem/islam" "Other eastern" ## [9] "Hinduism" "Buddhism" ## [11] "Other" "None" ## [13] "Jewish" "Catholic" ## [15] "Protestant" "Not applicable" ## ## $denom ## [1] "No answer" "Don't know" "No denomination" ## [4] "Other" "Episcopal" "Presbyterian-dk wh" ## [7] "Presbyterian, merged" "Other presbyterian" "United pres ch in us" ## [10] "Presbyterian c in us" "Lutheran-dk which" "Evangelical luth" ## [13] "Other lutheran" "Wi evan luth synod" "Lutheran-mo synod" ## [16] "Luth ch in america" "Am lutheran" "Methodist-dk which" ## [19] "Other methodist" "United methodist" "Afr meth ep zion" ## [22] "Afr meth episcopal" "Baptist-dk which" "Other baptists" ## [25] "Southern baptist" "Nat bapt conv usa" "Nat bapt conv of am" ## [28] "Am bapt ch in usa" "Am baptist asso" "Not applicable" ``` * `rincome` is principaled, rest are arbitrary 3. Why did moving “Not applicable” to the front of the levels move it to the bottom of the plot? 
* Becuase is moving this factor to be first in order 15\.5: Modifying factor levels ------------------------------ Example with `fct_recode` ``` gss_cat %>% mutate(partyid = fct_recode(partyid, "Republican, strong" = "Strong republican", "Republican, weak" = "Not str republican", "Independent, near rep" = "Ind,near rep", "Independent, near dem" = "Ind,near dem", "Democrat, weak" = "Not str democrat", "Democrat, strong" = "Strong democrat" )) %>% count(partyid) ``` ``` ## # A tibble: 10 x 2 ## partyid n ## <fct> <int> ## 1 No answer 154 ## 2 Don't know 1 ## 3 Other party 393 ## 4 Republican, strong 2314 ## 5 Republican, weak 3032 ## 6 Independent, near rep 1791 ## 7 Independent 4119 ## 8 Independent, near dem 2499 ## 9 Democrat, weak 3690 ## 10 Democrat, strong 3490 ``` ### 15\.5\.1 1. How have the proportions of people identifying as Democrat, Republican, and Independent changed over time? *As a line plot:* ``` gss_cat %>% mutate(partyid = fct_collapse( partyid, other = c("No answer", "Don't know", "Other party"), rep = c("Strong republican", "Not str republican"), ind = c("Ind,near rep", "Independent", "Ind,near dem"), dem = c("Not str democrat", "Strong democrat") )) %>% count(year, partyid) %>% group_by(year) %>% mutate(prop = n / sum(n)) %>% ungroup() %>% ggplot(aes( x = year, y = prop, colour = fct_reorder2(partyid, year, prop) )) + geom_line() + labs(colour = "partyid") ``` *As a bar plot:* ``` gss_cat %>% mutate(partyid = fct_collapse( partyid, other = c("No answer", "Don't know", "Other party"), rep = c("Strong republican", "Not str republican"), ind = c("Ind,near rep", "Independent", "Ind,near dem"), dem = c("Not str democrat", "Strong democrat") )) %>% count(year, partyid) %>% group_by(year) %>% mutate(prop = n / sum(n)) %>% ungroup() %>% ggplot(aes( x = year, y = prop, fill = fct_reorder2(partyid, year, prop) )) + geom_col() + labs(colour = "partyid") ``` * Suggests proportion of republicans has gone down with independents and other going up. 2. How could you collapse `rincome` into a small set of categories? ``` other = c("No answer", "Don't know", "Refused", "Not applicable") high = c("$25000 or more", "$20000 - 24999", "$15000 - 19999", "$10000 - 14999") med = c("$8000 to 9999", "$7000 to 7999", "$6000 to 6999", "$5000 to 5999") low = c("$4000 to 4999", "$3000 to 3999", "$1000 to 2999", "Lt $1000") mutate(gss_cat, rincome = fct_collapse( rincome, other = other, high = high, med = med, low = low )) %>% count(rincome) ``` ``` ## # A tibble: 4 x 2 ## rincome n ## <fct> <int> ## 1 other 8468 ## 2 high 10862 ## 3 med 970 ## 4 low 1183 ``` Appendix -------- ### Viewing all levels A few ways to get an initial look at the levels or counts across a dataset ``` gss_cat %>% purrr::map(unique) gss_cat %>% purrr::map(table) gss_cat %>% purrr::map(table) %>% purrr::map(plot) gss_cat %>% mutate_if(is.factor, ~fct_lump(., 14)) %>% sample_n(1000) %>% GGally::ggpairs() ``` *Percentage NA each level*: ``` gss_cat %>% purrr::map(~(sum(is.na(.x)) / length(.x))) %>% as_tibble() # essentially equivalent... gss_cat %>% summarise_all(~(sum(is.na(.)) / length(.))) ``` *Print all levels of tibble*: ``` gss_cat %>% count(age) %>% print(n = Inf) ``` 15\.4: General Social Survey ---------------------------- ### 15\.3\.1 1. Explore the distribution of `rincome` (reported income). What makes the default bar chart hard to understand? How could you improve the plot? 
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/16-dates-and-times.html
Ch. 16: Dates and times ======================= **Key questions:** * 16\.2\.4\. \#3 * 16\.3\.4\. \#1, 4, 5 * 16\.4\.5\. \#4 **Functions and notes:** * `today` get current date * `now` get current date\-time * `ymd_hms` one example of straight\-forward set\-of of functions that take either strings or unquoted numbers and output dates or date\-times * `make_datetime` create date\-time from individual components, e.g. make\_datetime(year, month, day, hour, minute) * `as_date_time` and `as_date` let you switch between date\-time and dates, e.g. `as_datetime(today())` or `as_date(now())` * Accessor functions let you pull out components from an existing date\-time: + `year`, `month`, `mday`, `yday`, `wday`, `hour`, `minute`, `second` - `month` and `wday` have `label = TRUE` to pull the abbreviated name rather than the number, and pull full name with `abbr = FALSE` + You can also use these to set particular components `year(datetime) <- 2020` * `update` allows you to specify multiple values at one time, e.g. `update(datetime, year = 2020, month = 2, mday = 2, hour = 2)` + When values are too big they roll\-over e.g. `update(ymd("2015-02-01"), mday = 30)` will become ‘2015\-03\-02’ * Rounding functions to nearest unit of time + `floor_date`, `round_date`, `ceiling_date` * `as.duration` convert diff\-time to a duration * Durations (can add and multiply): + `dseconds`, `dhours`, `ddays`, `dweeks`, `dyears` * Periods (can add and multiply), more likely to do what you expect than duration: + `seconds`, `minutes`, `hours`, `days`, `weeks`, `months` * Interval is a duration with a starting point, making it precise and possible to determine EXACT length + e.g. `(today() %--% next_year) / ddays(1)` to find exact duration * `Sys.timezone` to see what R thinks your current time zone is * `tz =` arg in `ymd_hms` let’s you change printing behavior (not underlying value, as assumes UTC unless changed) * `with_tz` allows you to print an existing date\-time object to a specific other timezone * `force_tz` when have an object that’s been labeled with wrong time\-zone and need to fix it 16\.2: Creating date/times -------------------------- Note that 1 in date\-times is treated as 1 \- second in numeric contexts, so example below sets `binwidth = 86400` to specify 1 day ``` make_datetime_100 <- function(year, month, day, time) { make_datetime(year, month, day, time %/% 100, time %% 100) } flights_dt <- flights %>% filter(!is.na(dep_time), !is.na(arr_time)) %>% mutate_at(c("dep_time", "arr_time", "sched_dep_time", "sched_arr_time"), ~make_datetime_100(year, month, day, .)) %>% select(origin, dest, ends_with("delay"), ends_with("time")) flights_dt %>% ggplot(aes(dep_time)) + geom_freqpoly(binwidth = 86400) ``` ### 16\.2\.4 1. What happens if you parse a string that contains invalid dates? ``` ymd(c("2010-10-10", "bananas")) ``` ``` ## Warning: 1 failed to parse. ``` ``` ## [1] "2010-10-10" NA ``` * Outputs an NA and sends warning of number that failed to parse 2. What does the `tzone` argument to `today()` do? Why is it important? * Let’s you specify timezones, may be different days depending on location ``` today(tzone = "MST") ``` ``` ## [1] "2019-06-05" ``` ``` now(tzone = "MST") ``` ``` ## [1] "2019-06-05 16:27:06 MST" ``` 3. 
Use the appropriate lubridate function to parse each of the following dates: ``` d1 <- "January 1, 2010" d2 <- "2015-Mar-07" d3 <- "06-Jun-2017" d4 <- c("August 19 (2015)", "July 1 (2015)") d5 <- "12/30/14" # Dec 30, 2014 ``` ``` mdy(d1) ``` ``` ## [1] "2010-01-01" ``` ``` ymd(d2) ``` ``` ## [1] "2015-03-07" ``` ``` dmy(d3) ``` ``` ## [1] "2017-06-06" ``` ``` mdy(d4) ``` ``` ## [1] "2015-08-19" "2015-07-01" ``` ``` mdy(d5) ``` ``` ## [1] "2014-12-30" ``` 16\.3: Date\-time components ---------------------------- This allows you to plot the number of flights per week ``` flights_dt %>% count(week = floor_date(dep_time, "week")) %>% ggplot(aes(week, n)) + geom_line() ``` ### 16\.3\.4 1. How does the distribution of flight times within a day change over the course of the year? *Median flight time by day* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, day_dep) %>% summarise(day_median = median(dep_time)) %>% ungroup() %>% ggplot(aes(x = day_dep, y = day_median)) + geom_line(aes(colour = quarter_dep, group = 1)) + labs(title = "Median flight times by day, coloured by quarter", subtitle = "Typical flight times change with daylight savings times")+ geom_vline(xintercept = ymd("20130310"), linetype = 2)+ geom_vline(xintercept = ymd("20131103"), linetype = 2) ``` * First couple and last couple months tend to have slightly earlier start times*Quantiles of flight times by month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), quarter_dep = quarter(dep_time) %>% factor(), wk_dep = week(dep_time), dep_time = as.hms(dep_time)) %>% group_by(month_dep, wk_dep) %>% ungroup() %>% ggplot(aes(x = month_dep, y = dep_time, group = month_dep)) + geom_boxplot() ``` * Reinforces prior plot, shows that first couple and last couple months of year tend to have slightly higher proportion of flights earlier in day * Last week of the year have a lower proportion of late flights, and a higher proportion of morning flightsSee [16\.3\.4\.1](16-dates-and-times.html#section-64) for a few other plots I looked at. 2. Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings. ``` flights_dt %>% mutate(dep_delay_check = (dep_time - sched_dep_time) / dminutes(1), same = dep_delay == dep_delay_check, difference = dep_delay_check - dep_delay) %>% filter(abs(difference) > 0) ``` ``` ## # A tibble: 1,205 x 12 ## origin dest dep_delay arr_delay dep_time sched_dep_time ## <chr> <chr> <dbl> <dbl> <dttm> <dttm> ## 1 JFK BWI 853 851 2013-01-01 08:48:00 2013-01-01 18:35:00 ## 2 JFK SJU 43 36 2013-01-02 00:42:00 2013-01-02 23:59:00 ## 3 JFK SYR 156 154 2013-01-02 01:26:00 2013-01-02 22:50:00 ## 4 JFK SJU 33 22 2013-01-03 00:32:00 2013-01-03 23:59:00 ## 5 JFK BUF 185 172 2013-01-03 00:50:00 2013-01-03 21:45:00 ## 6 JFK BQN 156 143 2013-01-03 02:35:00 2013-01-03 23:59:00 ## 7 JFK SJU 26 23 2013-01-04 00:25:00 2013-01-04 23:59:00 ## 8 JFK PWM 141 125 2013-01-04 01:06:00 2013-01-04 22:45:00 ## 9 JFK PSE 15 18 2013-01-05 00:14:00 2013-01-05 23:59:00 ## 10 JFK FLL 127 130 2013-01-05 00:37:00 2013-01-05 22:30:00 ## # ... with 1,195 more rows, and 6 more variables: arr_time <dttm>, ## # sched_arr_time <dttm>, air_time <dbl>, dep_delay_check <dbl>, ## # same <lgl>, difference <dbl> ``` * They are except in the case when it goes over a day, the day is not pushed forward so it counts it as being 24 hours off 3. 
Compare `air_time` with the duration between the departure and arrival. Explain your findings. (Hint: consider the location of the airport.) ``` flights_dt %>% mutate(air_time_check = (arr_time - dep_time) / dminutes(1)) %>% select(air_time_check, air_time, dep_time, arr_time, everything()) ``` ``` ## # A tibble: 328,063 x 10 ## air_time_check air_time dep_time arr_time origin ## <dbl> <dbl> <dttm> <dttm> <chr> ## 1 193 227 2013-01-01 05:17:00 2013-01-01 08:30:00 EWR ## 2 197 227 2013-01-01 05:33:00 2013-01-01 08:50:00 LGA ## 3 221 160 2013-01-01 05:42:00 2013-01-01 09:23:00 JFK ## 4 260 183 2013-01-01 05:44:00 2013-01-01 10:04:00 JFK ## 5 138 116 2013-01-01 05:54:00 2013-01-01 08:12:00 LGA ## 6 106 150 2013-01-01 05:54:00 2013-01-01 07:40:00 EWR ## 7 198 158 2013-01-01 05:55:00 2013-01-01 09:13:00 EWR ## 8 72 53 2013-01-01 05:57:00 2013-01-01 07:09:00 LGA ## 9 161 140 2013-01-01 05:57:00 2013-01-01 08:38:00 JFK ## 10 115 138 2013-01-01 05:58:00 2013-01-01 07:53:00 LGA ## # ... with 328,053 more rows, and 5 more variables: dest <chr>, ## # dep_delay <dbl>, arr_delay <dbl>, sched_dep_time <dttm>, ## # sched_arr_time <dttm> ``` * Initial check is off, so need to take into account the time\-zone and difference from NYC, so join timezone document ``` flights_dt %>% left_join(select(nycflights13::airports, dest = faa, tz), by = "dest") %>% mutate(arr_time_new = arr_time - dhours(tz + 5)) %>% mutate(air_time_tz = (arr_time_new - dep_time) / dminutes(1), diff_Airtime = air_time_tz - air_time) %>% select( origin, dest, tz, contains("time"), -(contains("sched"))) ``` ``` ## # A tibble: 328,063 x 9 ## origin dest tz dep_time arr_time air_time ## <chr> <chr> <dbl> <dttm> <dttm> <dbl> ## 1 EWR IAH -6 2013-01-01 05:17:00 2013-01-01 08:30:00 227 ## 2 LGA IAH -6 2013-01-01 05:33:00 2013-01-01 08:50:00 227 ## 3 JFK MIA -5 2013-01-01 05:42:00 2013-01-01 09:23:00 160 ## 4 JFK BQN NA 2013-01-01 05:44:00 2013-01-01 10:04:00 183 ## 5 LGA ATL -5 2013-01-01 05:54:00 2013-01-01 08:12:00 116 ## 6 EWR ORD -6 2013-01-01 05:54:00 2013-01-01 07:40:00 150 ## 7 EWR FLL -5 2013-01-01 05:55:00 2013-01-01 09:13:00 158 ## 8 LGA IAD -5 2013-01-01 05:57:00 2013-01-01 07:09:00 53 ## 9 JFK MCO -5 2013-01-01 05:57:00 2013-01-01 08:38:00 140 ## 10 LGA ORD -6 2013-01-01 05:58:00 2013-01-01 07:53:00 138 ## # ... with 328,053 more rows, and 3 more variables: arr_time_new <dttm>, ## # air_time_tz <dbl>, diff_Airtime <dbl> ``` * Is closer but still off. In chapter 5, problem 5\.5\.2\.1 I go further into this * In [Appendix](28-graphics-for-communication.html#appendix-13) section [16\.3\.4\.3](16-dates-and-times.html#section-65) filter to NAs 4. How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why? ``` flights_dt %>% mutate(sched_dep_time = as.hms(floor_date(sched_dep_time, "30 mins"))) %>% group_by(sched_dep_time) %>% summarise(delay_mean = mean(arr_delay, na.rm = TRUE), n = n(), n_na = sum(is.na(arr_delay)) / n, delay_median = median(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = sched_dep_time, y = delay_mean, size = n)) + geom_point() ``` * It goes\-up throughout the day * Use `sched_dep_time` because it has the correct day 5. On what day of the week should you leave if you want to minimise the chance of a delay? 
``` flights_dt %>% mutate(weekday = wday(sched_dep_time, label = TRUE)) %>% group_by(weekday) %>% summarise(prop_delay = sum(dep_delay > 0) / n()) ``` ``` ## # A tibble: 7 x 2 ## weekday prop_delay ## <ord> <dbl> ## 1 Sun 0.383 ## 2 Mon 0.401 ## 3 Tue 0.364 ## 4 Wed 0.372 ## 5 Thu 0.431 ## 6 Fri 0.425 ## 7 Sat 0.348 ``` * wknd has a slightly lower proportion of flights delayed (Thursday has the worst) 6. What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar? ``` ggplot(diamonds, aes(x = carat)) + geom_histogram(bins = 500)+ labs(title = "Distribution of carat in diamonds dataset") ggplot(flights, aes(x = as.hms(sched_dep_time))) + geom_histogram(bins = 24*6)+ labs(title = "Distribution of scheduled departure times in flights dataset") ``` * Both have gaps and peaks at ‘attractive’ values 7. Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early. Hint: create a binary variable that tells you whether or not a flight was delayed. ``` mutate(flights_dt, mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), delayed = dep_delay > 0) %>% group_by(mins_dep) %>% summarise(prop_delayed = sum(delayed) / n()) %>% ggplot(aes(x = mins_dep, y = prop_delayed)) + geom_line() ``` * Consistent with above hypothesis 16\.4: Time spans ----------------- * **durations**, which represent an exact number of seconds. * **periods**, which represent human units like weeks and months. * **intervals**, which represent a starting and ending point. Permitted arithmetic operations between different data types Periods example, using durations to fix oddity of problem when flight arrives overnight ``` flights_dt <- flights_dt %>% mutate( overnight = arr_time < dep_time, arr_time = arr_time + days(overnight * 1), sched_arr_time = sched_arr_time + days(overnight * 1) ) ``` Intervals example to get precise number of days dependent on specific time ``` next_year <- today() + years(1) (today() %--% next_year) / ddays(1) ``` ``` ## [1] 366 ``` To find out how many periods fall in an interval, need to use integer division ``` (today() %--% next_year) %/% days(1) ``` ``` ## Note: method with signature 'Timespan#Timespan' chosen for function '%/%', ## target signature 'Interval#Period'. ## "Interval#ANY", "ANY#Period" would also be valid ``` ``` ## [1] 366 ``` ### 16\.4\.5 1. Why is there `months()` but no `dmonths()`? * the duration varies from month to month 2. Explain `days(overnight * 1)` to someone who has just started learning R. How does it work? * this used in the example above makes it such that if `overnight` is TRUE, it will return the same time period but one day ahead, if false, does not change (as is adding 0 days) 3. 1. Create a vector of dates giving the first day of every month in 2015\. ``` x <- ymd("2015-01-01") mons <- c(0:11) (x + months(mons)) %>% wday(label = TRUE) ``` ``` ## [1] Thu Sun Sun Wed Fri Mon Wed Sat Tue Thu Sun Tue ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 2. Create a vector of dates giving the first day of every month in the *current* year. ``` x <- today() %>% update(month = 1, mday = 1) mons <- c(0:11) (x + months(mons)) %>% wday(label=TRUE) ``` ``` ## [1] Tue Fri Fri Mon Wed Sat Mon Thu Sun Tue Fri Sun ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 4. Write a function that given your birthday (as a date), returns how old you are in years. 
``` birthday_age <- function(birthday) { (ymd(birthday) %--% today()) %/% years(1) } birthday_age("1989-09-07") ``` ``` ## [1] 29 ``` 5. Why can’t `(today() %--% (today() + years(1)) / months(1)` work? * Can’t add and subtract intervals Appendix -------- ### 16\.3\.4\.1 *Weekly flight proportions by 4 hour blocks* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), wk_dep = week(dep_time), dep_time_4hrs = floor_date(dep_time, "4 hours"), hour_dep_4hrs = hour(dep_time_4hrs) %>% factor) %>% count(wk_dep, hour_dep_4hrs) %>% group_by(wk_dep) %>% mutate(wk_tot = sum(n), wk_prop = round(n / wk_tot, 3)) %>% ungroup() %>% ggplot(aes(x = wk_dep, y = wk_prop)) + geom_col(aes(fill = hour_dep_4hrs)) ``` *Weekly median fight time* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), wk_dep = floor_date(dep_time, "1 week") %>% as_date, dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, wk_dep) %>% summarise(wk_median = median(dep_time)) %>% ungroup() %>% mutate(wk_median = as.hms(wk_median)) %>% ggplot(aes(x = wk_dep, y = wk_median)) + geom_line(aes(colour = quarter_dep, group = 1)) ``` *Proportion of flights in each hour, by quarter* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), hour_dep = hour(dep_time)) %>% count(quarter_dep, hour_dep) %>% group_by(quarter_dep) %>% mutate(quarter_tot = sum(n), quarter_prop = round(n / quarter_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = quarter_prop)) + geom_line(aes(colour = quarter_dep)) ``` * Q1 seems to be a little more extreme at the local maximas *Look at proportion of flights by hour faceted by each month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), hour_dep = hour(dep_time)) %>% count(month_dep, hour_dep) %>% group_by(month_dep) %>% mutate(month_tot = sum(n), month_prop = round(n / month_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = month_prop)) + geom_line() + facet_wrap( ~ month_dep) ``` ### 16\.3\.4\.3 * Perhaps these are flights where landed in different location… ``` flights_dt %>% mutate(arr_delay_test = (arr_time - sched_arr_time) / dminutes(1)) %>% select( origin, dest, dep_delay, arr_delay, arr_delay_test, contains("time")) %>% filter(is.na(arr_delay)) ``` ``` ## # A tibble: 717 x 10 ## origin dest dep_delay arr_delay arr_delay_test dep_time ## <chr> <chr> <dbl> <dbl> <dbl> <dttm> ## 1 LGA XNA -5 NA 89 2013-01-01 15:25:00 ## 2 EWR STL 29 NA 195 2013-01-01 15:28:00 ## 3 LGA XNA -5 NA 98 2013-01-01 17:40:00 ## 4 EWR SAN 29 NA 108 2013-01-01 18:07:00 ## 5 JFK DFW 59 NA -1282 2013-01-01 19:39:00 ## 6 EWR TUL 22 NA 111 2013-01-01 19:52:00 ## 7 EWR XNA 43 NA 148 2013-01-02 09:05:00 ## 8 LGA GRR 120 NA 179 2013-01-02 11:25:00 ## 9 JFK DFW 8 NA 102 2013-01-02 18:48:00 ## 10 EWR MCI 85 NA 177 2013-01-02 18:49:00 ## # ... 
with 707 more rows, and 4 more variables: sched_dep_time <dttm>, ## # arr_time <dttm>, sched_arr_time <dttm>, air_time <dbl> ``` ### 16\.3\.4\.4 Below started looking at proportions… ``` mutate(flights_dt, dep_old = dep_time, sched_old = sched_dep_time, dep_time = floor_date(dep_time, "5 minutes"), sched_dep_time = floor_date(sched_dep_time, "5 minutes"), mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), delayed = dep_delay > 0) %>% group_by(mins_dep, mins_sched) %>% summarise(num_delayed = sum(delayed), num = n(), prop_delayed = num_delayed / num) %>% group_by(mins_dep) %>% mutate(num_tot = sum(num), prop_sched = num / num_tot, sched_dep_diff = mins_dep - mins_sched) %>% ungroup() %>% ggplot(aes(x = mins_dep, y = prop_sched, fill = factor(mins_sched))) + geom_col()+ labs(title = "Proportion of early flights by minute scheduled v. minute departed") ``` ``` mutate(flights_dt, dep_old = dep_time, sched_old = sched_dep_time, # dep_time = floor_date(dep_time, "5 minutes"), # sched_dep_time = floor_date(sched_dep_time, "5 minutes"), mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), early_less10 = dep_delay >= -10) %>% filter(dep_delay < 0) %>% group_by(mins_dep) %>% summarise(num = n(), sum_recent10 = sum(early_less10), prop_recent10 = sum_recent10 / num) %>% ungroup() %>% ggplot(aes(x = mins_dep, y = prop_recent10)) + geom_line()+ labs(title = "proportion of early flights that were scheduled to leave within 10 mins of when they did") ``` 16\.2: Creating date/times -------------------------- Note that 1 in date\-times is treated as 1 \- second in numeric contexts, so example below sets `binwidth = 86400` to specify 1 day ``` make_datetime_100 <- function(year, month, day, time) { make_datetime(year, month, day, time %/% 100, time %% 100) } flights_dt <- flights %>% filter(!is.na(dep_time), !is.na(arr_time)) %>% mutate_at(c("dep_time", "arr_time", "sched_dep_time", "sched_arr_time"), ~make_datetime_100(year, month, day, .)) %>% select(origin, dest, ends_with("delay"), ends_with("time")) flights_dt %>% ggplot(aes(dep_time)) + geom_freqpoly(binwidth = 86400) ``` ### 16\.2\.4 1. What happens if you parse a string that contains invalid dates? ``` ymd(c("2010-10-10", "bananas")) ``` ``` ## Warning: 1 failed to parse. ``` ``` ## [1] "2010-10-10" NA ``` * Outputs an NA and sends warning of number that failed to parse 2. What does the `tzone` argument to `today()` do? Why is it important? * Let’s you specify timezones, may be different days depending on location ``` today(tzone = "MST") ``` ``` ## [1] "2019-06-05" ``` ``` now(tzone = "MST") ``` ``` ## [1] "2019-06-05 16:27:06 MST" ``` 3. Use the appropriate lubridate function to parse each of the following dates: ``` d1 <- "January 1, 2010" d2 <- "2015-Mar-07" d3 <- "06-Jun-2017" d4 <- c("August 19 (2015)", "July 1 (2015)") d5 <- "12/30/14" # Dec 30, 2014 ``` ``` mdy(d1) ``` ``` ## [1] "2010-01-01" ``` ``` ymd(d2) ``` ``` ## [1] "2015-03-07" ``` ``` dmy(d3) ``` ``` ## [1] "2017-06-06" ``` ``` mdy(d4) ``` ``` ## [1] "2015-08-19" "2015-07-01" ``` ``` mdy(d5) ``` ``` ## [1] "2014-12-30" ``` ### 16\.2\.4 1. What happens if you parse a string that contains invalid dates? ``` ymd(c("2010-10-10", "bananas")) ``` ``` ## Warning: 1 failed to parse. ``` ``` ## [1] "2010-10-10" NA ``` * Outputs an NA and sends warning of number that failed to parse 2. What does the `tzone` argument to `today()` do? Why is it important? 
* Let’s you specify timezones, may be different days depending on location ``` today(tzone = "MST") ``` ``` ## [1] "2019-06-05" ``` ``` now(tzone = "MST") ``` ``` ## [1] "2019-06-05 16:27:06 MST" ``` 3. Use the appropriate lubridate function to parse each of the following dates: ``` d1 <- "January 1, 2010" d2 <- "2015-Mar-07" d3 <- "06-Jun-2017" d4 <- c("August 19 (2015)", "July 1 (2015)") d5 <- "12/30/14" # Dec 30, 2014 ``` ``` mdy(d1) ``` ``` ## [1] "2010-01-01" ``` ``` ymd(d2) ``` ``` ## [1] "2015-03-07" ``` ``` dmy(d3) ``` ``` ## [1] "2017-06-06" ``` ``` mdy(d4) ``` ``` ## [1] "2015-08-19" "2015-07-01" ``` ``` mdy(d5) ``` ``` ## [1] "2014-12-30" ``` 16\.3: Date\-time components ---------------------------- This allows you to plot the number of flights per week ``` flights_dt %>% count(week = floor_date(dep_time, "week")) %>% ggplot(aes(week, n)) + geom_line() ``` ### 16\.3\.4 1. How does the distribution of flight times within a day change over the course of the year? *Median flight time by day* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, day_dep) %>% summarise(day_median = median(dep_time)) %>% ungroup() %>% ggplot(aes(x = day_dep, y = day_median)) + geom_line(aes(colour = quarter_dep, group = 1)) + labs(title = "Median flight times by day, coloured by quarter", subtitle = "Typical flight times change with daylight savings times")+ geom_vline(xintercept = ymd("20130310"), linetype = 2)+ geom_vline(xintercept = ymd("20131103"), linetype = 2) ``` * First couple and last couple months tend to have slightly earlier start times*Quantiles of flight times by month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), quarter_dep = quarter(dep_time) %>% factor(), wk_dep = week(dep_time), dep_time = as.hms(dep_time)) %>% group_by(month_dep, wk_dep) %>% ungroup() %>% ggplot(aes(x = month_dep, y = dep_time, group = month_dep)) + geom_boxplot() ``` * Reinforces prior plot, shows that first couple and last couple months of year tend to have slightly higher proportion of flights earlier in day * Last week of the year have a lower proportion of late flights, and a higher proportion of morning flightsSee [16\.3\.4\.1](16-dates-and-times.html#section-64) for a few other plots I looked at. 2. Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings. ``` flights_dt %>% mutate(dep_delay_check = (dep_time - sched_dep_time) / dminutes(1), same = dep_delay == dep_delay_check, difference = dep_delay_check - dep_delay) %>% filter(abs(difference) > 0) ``` ``` ## # A tibble: 1,205 x 12 ## origin dest dep_delay arr_delay dep_time sched_dep_time ## <chr> <chr> <dbl> <dbl> <dttm> <dttm> ## 1 JFK BWI 853 851 2013-01-01 08:48:00 2013-01-01 18:35:00 ## 2 JFK SJU 43 36 2013-01-02 00:42:00 2013-01-02 23:59:00 ## 3 JFK SYR 156 154 2013-01-02 01:26:00 2013-01-02 22:50:00 ## 4 JFK SJU 33 22 2013-01-03 00:32:00 2013-01-03 23:59:00 ## 5 JFK BUF 185 172 2013-01-03 00:50:00 2013-01-03 21:45:00 ## 6 JFK BQN 156 143 2013-01-03 02:35:00 2013-01-03 23:59:00 ## 7 JFK SJU 26 23 2013-01-04 00:25:00 2013-01-04 23:59:00 ## 8 JFK PWM 141 125 2013-01-04 01:06:00 2013-01-04 22:45:00 ## 9 JFK PSE 15 18 2013-01-05 00:14:00 2013-01-05 23:59:00 ## 10 JFK FLL 127 130 2013-01-05 00:37:00 2013-01-05 22:30:00 ## # ... 
with 1,195 more rows, and 6 more variables: arr_time <dttm>, ## # sched_arr_time <dttm>, air_time <dbl>, dep_delay_check <dbl>, ## # same <lgl>, difference <dbl> ``` * They are except in the case when it goes over a day, the day is not pushed forward so it counts it as being 24 hours off 3. Compare `air_time` with the duration between the departure and arrival. Explain your findings. (Hint: consider the location of the airport.) ``` flights_dt %>% mutate(air_time_check = (arr_time - dep_time) / dminutes(1)) %>% select(air_time_check, air_time, dep_time, arr_time, everything()) ``` ``` ## # A tibble: 328,063 x 10 ## air_time_check air_time dep_time arr_time origin ## <dbl> <dbl> <dttm> <dttm> <chr> ## 1 193 227 2013-01-01 05:17:00 2013-01-01 08:30:00 EWR ## 2 197 227 2013-01-01 05:33:00 2013-01-01 08:50:00 LGA ## 3 221 160 2013-01-01 05:42:00 2013-01-01 09:23:00 JFK ## 4 260 183 2013-01-01 05:44:00 2013-01-01 10:04:00 JFK ## 5 138 116 2013-01-01 05:54:00 2013-01-01 08:12:00 LGA ## 6 106 150 2013-01-01 05:54:00 2013-01-01 07:40:00 EWR ## 7 198 158 2013-01-01 05:55:00 2013-01-01 09:13:00 EWR ## 8 72 53 2013-01-01 05:57:00 2013-01-01 07:09:00 LGA ## 9 161 140 2013-01-01 05:57:00 2013-01-01 08:38:00 JFK ## 10 115 138 2013-01-01 05:58:00 2013-01-01 07:53:00 LGA ## # ... with 328,053 more rows, and 5 more variables: dest <chr>, ## # dep_delay <dbl>, arr_delay <dbl>, sched_dep_time <dttm>, ## # sched_arr_time <dttm> ``` * Initial check is off, so need to take into account the time\-zone and difference from NYC, so join timezone document ``` flights_dt %>% left_join(select(nycflights13::airports, dest = faa, tz), by = "dest") %>% mutate(arr_time_new = arr_time - dhours(tz + 5)) %>% mutate(air_time_tz = (arr_time_new - dep_time) / dminutes(1), diff_Airtime = air_time_tz - air_time) %>% select( origin, dest, tz, contains("time"), -(contains("sched"))) ``` ``` ## # A tibble: 328,063 x 9 ## origin dest tz dep_time arr_time air_time ## <chr> <chr> <dbl> <dttm> <dttm> <dbl> ## 1 EWR IAH -6 2013-01-01 05:17:00 2013-01-01 08:30:00 227 ## 2 LGA IAH -6 2013-01-01 05:33:00 2013-01-01 08:50:00 227 ## 3 JFK MIA -5 2013-01-01 05:42:00 2013-01-01 09:23:00 160 ## 4 JFK BQN NA 2013-01-01 05:44:00 2013-01-01 10:04:00 183 ## 5 LGA ATL -5 2013-01-01 05:54:00 2013-01-01 08:12:00 116 ## 6 EWR ORD -6 2013-01-01 05:54:00 2013-01-01 07:40:00 150 ## 7 EWR FLL -5 2013-01-01 05:55:00 2013-01-01 09:13:00 158 ## 8 LGA IAD -5 2013-01-01 05:57:00 2013-01-01 07:09:00 53 ## 9 JFK MCO -5 2013-01-01 05:57:00 2013-01-01 08:38:00 140 ## 10 LGA ORD -6 2013-01-01 05:58:00 2013-01-01 07:53:00 138 ## # ... with 328,053 more rows, and 3 more variables: arr_time_new <dttm>, ## # air_time_tz <dbl>, diff_Airtime <dbl> ``` * Is closer but still off. In chapter 5, problem 5\.5\.2\.1 I go further into this * In [Appendix](28-graphics-for-communication.html#appendix-13) section [16\.3\.4\.3](16-dates-and-times.html#section-65) filter to NAs 4. How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why? ``` flights_dt %>% mutate(sched_dep_time = as.hms(floor_date(sched_dep_time, "30 mins"))) %>% group_by(sched_dep_time) %>% summarise(delay_mean = mean(arr_delay, na.rm = TRUE), n = n(), n_na = sum(is.na(arr_delay)) / n, delay_median = median(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = sched_dep_time, y = delay_mean, size = n)) + geom_point() ``` * It goes\-up throughout the day * Use `sched_dep_time` because it has the correct day 5. 
On what day of the week should you leave if you want to minimise the chance of a delay? ``` flights_dt %>% mutate(weekday = wday(sched_dep_time, label = TRUE)) %>% group_by(weekday) %>% summarise(prop_delay = sum(dep_delay > 0) / n()) ``` ``` ## # A tibble: 7 x 2 ## weekday prop_delay ## <ord> <dbl> ## 1 Sun 0.383 ## 2 Mon 0.401 ## 3 Tue 0.364 ## 4 Wed 0.372 ## 5 Thu 0.431 ## 6 Fri 0.425 ## 7 Sat 0.348 ``` * wknd has a slightly lower proportion of flights delayed (Thursday has the worst) 6. What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar? ``` ggplot(diamonds, aes(x = carat)) + geom_histogram(bins = 500)+ labs(title = "Distribution of carat in diamonds dataset") ggplot(flights, aes(x = as.hms(sched_dep_time))) + geom_histogram(bins = 24*6)+ labs(title = "Distribution of scheduled departure times in flights dataset") ``` * Both have gaps and peaks at ‘attractive’ values 7. Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early. Hint: create a binary variable that tells you whether or not a flight was delayed. ``` mutate(flights_dt, mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), delayed = dep_delay > 0) %>% group_by(mins_dep) %>% summarise(prop_delayed = sum(delayed) / n()) %>% ggplot(aes(x = mins_dep, y = prop_delayed)) + geom_line() ``` * Consistent with above hypothesis ### 16\.3\.4 1. How does the distribution of flight times within a day change over the course of the year? *Median flight time by day* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, day_dep) %>% summarise(day_median = median(dep_time)) %>% ungroup() %>% ggplot(aes(x = day_dep, y = day_median)) + geom_line(aes(colour = quarter_dep, group = 1)) + labs(title = "Median flight times by day, coloured by quarter", subtitle = "Typical flight times change with daylight savings times")+ geom_vline(xintercept = ymd("20130310"), linetype = 2)+ geom_vline(xintercept = ymd("20131103"), linetype = 2) ``` * First couple and last couple months tend to have slightly earlier start times*Quantiles of flight times by month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), quarter_dep = quarter(dep_time) %>% factor(), wk_dep = week(dep_time), dep_time = as.hms(dep_time)) %>% group_by(month_dep, wk_dep) %>% ungroup() %>% ggplot(aes(x = month_dep, y = dep_time, group = month_dep)) + geom_boxplot() ``` * Reinforces prior plot, shows that first couple and last couple months of year tend to have slightly higher proportion of flights earlier in day * Last week of the year have a lower proportion of late flights, and a higher proportion of morning flightsSee [16\.3\.4\.1](16-dates-and-times.html#section-64) for a few other plots I looked at. 2. Compare `dep_time`, `sched_dep_time` and `dep_delay`. Are they consistent? Explain your findings. 
``` flights_dt %>% mutate(dep_delay_check = (dep_time - sched_dep_time) / dminutes(1), same = dep_delay == dep_delay_check, difference = dep_delay_check - dep_delay) %>% filter(abs(difference) > 0) ``` ``` ## # A tibble: 1,205 x 12 ## origin dest dep_delay arr_delay dep_time sched_dep_time ## <chr> <chr> <dbl> <dbl> <dttm> <dttm> ## 1 JFK BWI 853 851 2013-01-01 08:48:00 2013-01-01 18:35:00 ## 2 JFK SJU 43 36 2013-01-02 00:42:00 2013-01-02 23:59:00 ## 3 JFK SYR 156 154 2013-01-02 01:26:00 2013-01-02 22:50:00 ## 4 JFK SJU 33 22 2013-01-03 00:32:00 2013-01-03 23:59:00 ## 5 JFK BUF 185 172 2013-01-03 00:50:00 2013-01-03 21:45:00 ## 6 JFK BQN 156 143 2013-01-03 02:35:00 2013-01-03 23:59:00 ## 7 JFK SJU 26 23 2013-01-04 00:25:00 2013-01-04 23:59:00 ## 8 JFK PWM 141 125 2013-01-04 01:06:00 2013-01-04 22:45:00 ## 9 JFK PSE 15 18 2013-01-05 00:14:00 2013-01-05 23:59:00 ## 10 JFK FLL 127 130 2013-01-05 00:37:00 2013-01-05 22:30:00 ## # ... with 1,195 more rows, and 6 more variables: arr_time <dttm>, ## # sched_arr_time <dttm>, air_time <dbl>, dep_delay_check <dbl>, ## # same <lgl>, difference <dbl> ``` * They are except in the case when it goes over a day, the day is not pushed forward so it counts it as being 24 hours off 3. Compare `air_time` with the duration between the departure and arrival. Explain your findings. (Hint: consider the location of the airport.) ``` flights_dt %>% mutate(air_time_check = (arr_time - dep_time) / dminutes(1)) %>% select(air_time_check, air_time, dep_time, arr_time, everything()) ``` ``` ## # A tibble: 328,063 x 10 ## air_time_check air_time dep_time arr_time origin ## <dbl> <dbl> <dttm> <dttm> <chr> ## 1 193 227 2013-01-01 05:17:00 2013-01-01 08:30:00 EWR ## 2 197 227 2013-01-01 05:33:00 2013-01-01 08:50:00 LGA ## 3 221 160 2013-01-01 05:42:00 2013-01-01 09:23:00 JFK ## 4 260 183 2013-01-01 05:44:00 2013-01-01 10:04:00 JFK ## 5 138 116 2013-01-01 05:54:00 2013-01-01 08:12:00 LGA ## 6 106 150 2013-01-01 05:54:00 2013-01-01 07:40:00 EWR ## 7 198 158 2013-01-01 05:55:00 2013-01-01 09:13:00 EWR ## 8 72 53 2013-01-01 05:57:00 2013-01-01 07:09:00 LGA ## 9 161 140 2013-01-01 05:57:00 2013-01-01 08:38:00 JFK ## 10 115 138 2013-01-01 05:58:00 2013-01-01 07:53:00 LGA ## # ... with 328,053 more rows, and 5 more variables: dest <chr>, ## # dep_delay <dbl>, arr_delay <dbl>, sched_dep_time <dttm>, ## # sched_arr_time <dttm> ``` * Initial check is off, so need to take into account the time\-zone and difference from NYC, so join timezone document ``` flights_dt %>% left_join(select(nycflights13::airports, dest = faa, tz), by = "dest") %>% mutate(arr_time_new = arr_time - dhours(tz + 5)) %>% mutate(air_time_tz = (arr_time_new - dep_time) / dminutes(1), diff_Airtime = air_time_tz - air_time) %>% select( origin, dest, tz, contains("time"), -(contains("sched"))) ``` ``` ## # A tibble: 328,063 x 9 ## origin dest tz dep_time arr_time air_time ## <chr> <chr> <dbl> <dttm> <dttm> <dbl> ## 1 EWR IAH -6 2013-01-01 05:17:00 2013-01-01 08:30:00 227 ## 2 LGA IAH -6 2013-01-01 05:33:00 2013-01-01 08:50:00 227 ## 3 JFK MIA -5 2013-01-01 05:42:00 2013-01-01 09:23:00 160 ## 4 JFK BQN NA 2013-01-01 05:44:00 2013-01-01 10:04:00 183 ## 5 LGA ATL -5 2013-01-01 05:54:00 2013-01-01 08:12:00 116 ## 6 EWR ORD -6 2013-01-01 05:54:00 2013-01-01 07:40:00 150 ## 7 EWR FLL -5 2013-01-01 05:55:00 2013-01-01 09:13:00 158 ## 8 LGA IAD -5 2013-01-01 05:57:00 2013-01-01 07:09:00 53 ## 9 JFK MCO -5 2013-01-01 05:57:00 2013-01-01 08:38:00 140 ## 10 LGA ORD -6 2013-01-01 05:58:00 2013-01-01 07:53:00 138 ## # ... 
with 328,053 more rows, and 3 more variables: arr_time_new <dttm>, ## # air_time_tz <dbl>, diff_Airtime <dbl> ``` * Is closer but still off. In chapter 5, problem 5\.5\.2\.1 I go further into this * In [Appendix](28-graphics-for-communication.html#appendix-13) section [16\.3\.4\.3](16-dates-and-times.html#section-65) filter to NAs 4. How does the average delay time change over the course of a day? Should you use `dep_time` or `sched_dep_time`? Why? ``` flights_dt %>% mutate(sched_dep_time = as.hms(floor_date(sched_dep_time, "30 mins"))) %>% group_by(sched_dep_time) %>% summarise(delay_mean = mean(arr_delay, na.rm = TRUE), n = n(), n_na = sum(is.na(arr_delay)) / n, delay_median = median(arr_delay, na.rm = TRUE)) %>% ggplot(aes(x = sched_dep_time, y = delay_mean, size = n)) + geom_point() ``` * It goes\-up throughout the day * Use `sched_dep_time` because it has the correct day 5. On what day of the week should you leave if you want to minimise the chance of a delay? ``` flights_dt %>% mutate(weekday = wday(sched_dep_time, label = TRUE)) %>% group_by(weekday) %>% summarise(prop_delay = sum(dep_delay > 0) / n()) ``` ``` ## # A tibble: 7 x 2 ## weekday prop_delay ## <ord> <dbl> ## 1 Sun 0.383 ## 2 Mon 0.401 ## 3 Tue 0.364 ## 4 Wed 0.372 ## 5 Thu 0.431 ## 6 Fri 0.425 ## 7 Sat 0.348 ``` * wknd has a slightly lower proportion of flights delayed (Thursday has the worst) 6. What makes the distribution of `diamonds$carat` and `flights$sched_dep_time` similar? ``` ggplot(diamonds, aes(x = carat)) + geom_histogram(bins = 500)+ labs(title = "Distribution of carat in diamonds dataset") ggplot(flights, aes(x = as.hms(sched_dep_time))) + geom_histogram(bins = 24*6)+ labs(title = "Distribution of scheduled departure times in flights dataset") ``` * Both have gaps and peaks at ‘attractive’ values 7. Confirm my hypothesis that the early departures of flights in minutes 20\-30 and 50\-60 are caused by scheduled flights that leave early. Hint: create a binary variable that tells you whether or not a flight was delayed. ``` mutate(flights_dt, mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), delayed = dep_delay > 0) %>% group_by(mins_dep) %>% summarise(prop_delayed = sum(delayed) / n()) %>% ggplot(aes(x = mins_dep, y = prop_delayed)) + geom_line() ``` * Consistent with above hypothesis 16\.4: Time spans ----------------- * **durations**, which represent an exact number of seconds. * **periods**, which represent human units like weeks and months. * **intervals**, which represent a starting and ending point. Permitted arithmetic operations between different data types Periods example, using durations to fix oddity of problem when flight arrives overnight ``` flights_dt <- flights_dt %>% mutate( overnight = arr_time < dep_time, arr_time = arr_time + days(overnight * 1), sched_arr_time = sched_arr_time + days(overnight * 1) ) ``` Intervals example to get precise number of days dependent on specific time ``` next_year <- today() + years(1) (today() %--% next_year) / ddays(1) ``` ``` ## [1] 366 ``` To find out how many periods fall in an interval, need to use integer division ``` (today() %--% next_year) %/% days(1) ``` ``` ## Note: method with signature 'Timespan#Timespan' chosen for function '%/%', ## target signature 'Interval#Period'. ## "Interval#ANY", "ANY#Period" would also be valid ``` ``` ## [1] 366 ``` ### 16\.4\.5 1. Why is there `months()` but no `dmonths()`? * the duration varies from month to month 2. Explain `days(overnight * 1)` to someone who has just started learning R. 
How does it work? * this used in the example above makes it such that if `overnight` is TRUE, it will return the same time period but one day ahead, if false, does not change (as is adding 0 days) 3. 1. Create a vector of dates giving the first day of every month in 2015\. ``` x <- ymd("2015-01-01") mons <- c(0:11) (x + months(mons)) %>% wday(label = TRUE) ``` ``` ## [1] Thu Sun Sun Wed Fri Mon Wed Sat Tue Thu Sun Tue ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 2. Create a vector of dates giving the first day of every month in the *current* year. ``` x <- today() %>% update(month = 1, mday = 1) mons <- c(0:11) (x + months(mons)) %>% wday(label=TRUE) ``` ``` ## [1] Tue Fri Fri Mon Wed Sat Mon Thu Sun Tue Fri Sun ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 4. Write a function that given your birthday (as a date), returns how old you are in years. ``` birthday_age <- function(birthday) { (ymd(birthday) %--% today()) %/% years(1) } birthday_age("1989-09-07") ``` ``` ## [1] 29 ``` 5. Why can’t `(today() %--% (today() + years(1)) / months(1)` work? * Can’t add and subtract intervals ### 16\.4\.5 1. Why is there `months()` but no `dmonths()`? * the duration varies from month to month 2. Explain `days(overnight * 1)` to someone who has just started learning R. How does it work? * this used in the example above makes it such that if `overnight` is TRUE, it will return the same time period but one day ahead, if false, does not change (as is adding 0 days) 3. 1. Create a vector of dates giving the first day of every month in 2015\. ``` x <- ymd("2015-01-01") mons <- c(0:11) (x + months(mons)) %>% wday(label = TRUE) ``` ``` ## [1] Thu Sun Sun Wed Fri Mon Wed Sat Tue Thu Sun Tue ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 2. Create a vector of dates giving the first day of every month in the *current* year. ``` x <- today() %>% update(month = 1, mday = 1) mons <- c(0:11) (x + months(mons)) %>% wday(label=TRUE) ``` ``` ## [1] Tue Fri Fri Mon Wed Sat Mon Thu Sun Tue Fri Sun ## Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` 4. Write a function that given your birthday (as a date), returns how old you are in years. ``` birthday_age <- function(birthday) { (ymd(birthday) %--% today()) %/% years(1) } birthday_age("1989-09-07") ``` ``` ## [1] 29 ``` 5. Why can’t `(today() %--% (today() + years(1)) / months(1)` work? 
* Can’t add and subtract intervals Appendix -------- ### 16\.3\.4\.1 *Weekly flight proportions by 4 hour blocks* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), wk_dep = week(dep_time), dep_time_4hrs = floor_date(dep_time, "4 hours"), hour_dep_4hrs = hour(dep_time_4hrs) %>% factor) %>% count(wk_dep, hour_dep_4hrs) %>% group_by(wk_dep) %>% mutate(wk_tot = sum(n), wk_prop = round(n / wk_tot, 3)) %>% ungroup() %>% ggplot(aes(x = wk_dep, y = wk_prop)) + geom_col(aes(fill = hour_dep_4hrs)) ``` *Weekly median fight time* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), wk_dep = floor_date(dep_time, "1 week") %>% as_date, dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, wk_dep) %>% summarise(wk_median = median(dep_time)) %>% ungroup() %>% mutate(wk_median = as.hms(wk_median)) %>% ggplot(aes(x = wk_dep, y = wk_median)) + geom_line(aes(colour = quarter_dep, group = 1)) ``` *Proportion of flights in each hour, by quarter* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), hour_dep = hour(dep_time)) %>% count(quarter_dep, hour_dep) %>% group_by(quarter_dep) %>% mutate(quarter_tot = sum(n), quarter_prop = round(n / quarter_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = quarter_prop)) + geom_line(aes(colour = quarter_dep)) ``` * Q1 seems to be a little more extreme at the local maximas *Look at proportion of flights by hour faceted by each month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), hour_dep = hour(dep_time)) %>% count(month_dep, hour_dep) %>% group_by(month_dep) %>% mutate(month_tot = sum(n), month_prop = round(n / month_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = month_prop)) + geom_line() + facet_wrap( ~ month_dep) ``` ### 16\.3\.4\.3 * Perhaps these are flights where landed in different location… ``` flights_dt %>% mutate(arr_delay_test = (arr_time - sched_arr_time) / dminutes(1)) %>% select( origin, dest, dep_delay, arr_delay, arr_delay_test, contains("time")) %>% filter(is.na(arr_delay)) ``` ``` ## # A tibble: 717 x 10 ## origin dest dep_delay arr_delay arr_delay_test dep_time ## <chr> <chr> <dbl> <dbl> <dbl> <dttm> ## 1 LGA XNA -5 NA 89 2013-01-01 15:25:00 ## 2 EWR STL 29 NA 195 2013-01-01 15:28:00 ## 3 LGA XNA -5 NA 98 2013-01-01 17:40:00 ## 4 EWR SAN 29 NA 108 2013-01-01 18:07:00 ## 5 JFK DFW 59 NA -1282 2013-01-01 19:39:00 ## 6 EWR TUL 22 NA 111 2013-01-01 19:52:00 ## 7 EWR XNA 43 NA 148 2013-01-02 09:05:00 ## 8 LGA GRR 120 NA 179 2013-01-02 11:25:00 ## 9 JFK DFW 8 NA 102 2013-01-02 18:48:00 ## 10 EWR MCI 85 NA 177 2013-01-02 18:49:00 ## # ... with 707 more rows, and 4 more variables: sched_dep_time <dttm>, ## # arr_time <dttm>, sched_arr_time <dttm>, air_time <dbl> ``` ### 16\.3\.4\.4 Below started looking at proportions… ``` mutate(flights_dt, dep_old = dep_time, sched_old = sched_dep_time, dep_time = floor_date(dep_time, "5 minutes"), sched_dep_time = floor_date(sched_dep_time, "5 minutes"), mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), delayed = dep_delay > 0) %>% group_by(mins_dep, mins_sched) %>% summarise(num_delayed = sum(delayed), num = n(), prop_delayed = num_delayed / num) %>% group_by(mins_dep) %>% mutate(num_tot = sum(num), prop_sched = num / num_tot, sched_dep_diff = mins_dep - mins_sched) %>% ungroup() %>% ggplot(aes(x = mins_dep, y = prop_sched, fill = factor(mins_sched))) + geom_col()+ labs(title = "Proportion of early flights by minute scheduled v. 
minute departed") ``` ``` mutate(flights_dt, dep_old = dep_time, sched_old = sched_dep_time, # dep_time = floor_date(dep_time, "5 minutes"), # sched_dep_time = floor_date(sched_dep_time, "5 minutes"), mins_dep = minute(dep_time), mins_sched = minute(sched_dep_time), early_less10 = dep_delay >= -10) %>% filter(dep_delay < 0) %>% group_by(mins_dep) %>% summarise(num = n(), sum_recent10 = sum(early_less10), prop_recent10 = sum_recent10 / num) %>% ungroup() %>% ggplot(aes(x = mins_dep, y = prop_recent10)) + geom_line()+ labs(title = "proportion of early flights that were scheduled to leave within 10 mins of when they did") ``` ### 16\.3\.4\.1 *Weekly flight proportions by 4 hour blocks* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), wk_dep = week(dep_time), dep_time_4hrs = floor_date(dep_time, "4 hours"), hour_dep_4hrs = hour(dep_time_4hrs) %>% factor) %>% count(wk_dep, hour_dep_4hrs) %>% group_by(wk_dep) %>% mutate(wk_tot = sum(n), wk_prop = round(n / wk_tot, 3)) %>% ungroup() %>% ggplot(aes(x = wk_dep, y = wk_prop)) + geom_col(aes(fill = hour_dep_4hrs)) ``` *Weekly median fight time* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), day_dep = as_date(dep_time), wk_dep = floor_date(dep_time, "1 week") %>% as_date, dep_time = as.hms(dep_time)) %>% group_by(quarter_dep, wk_dep) %>% summarise(wk_median = median(dep_time)) %>% ungroup() %>% mutate(wk_median = as.hms(wk_median)) %>% ggplot(aes(x = wk_dep, y = wk_median)) + geom_line(aes(colour = quarter_dep, group = 1)) ``` *Proportion of flights in each hour, by quarter* ``` flights_dt %>% transmute(quarter_dep = quarter(dep_time) %>% factor(), hour_dep = hour(dep_time)) %>% count(quarter_dep, hour_dep) %>% group_by(quarter_dep) %>% mutate(quarter_tot = sum(n), quarter_prop = round(n / quarter_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = quarter_prop)) + geom_line(aes(colour = quarter_dep)) ``` * Q1 seems to be a little more extreme at the local maximas *Look at proportion of flights by hour faceted by each month* ``` flights_dt %>% transmute(month_dep = month(dep_time, label = TRUE), hour_dep = hour(dep_time)) %>% count(month_dep, hour_dep) %>% group_by(month_dep) %>% mutate(month_tot = sum(n), month_prop = round(n / month_tot, 3)) %>% ungroup() %>% ggplot(aes(x = hour_dep, y = month_prop)) + geom_line() + facet_wrap( ~ month_dep) ``` ### 16\.3\.4\.3 * Perhaps these are flights where landed in different location… ``` flights_dt %>% mutate(arr_delay_test = (arr_time - sched_arr_time) / dminutes(1)) %>% select( origin, dest, dep_delay, arr_delay, arr_delay_test, contains("time")) %>% filter(is.na(arr_delay)) ``` ``` ## # A tibble: 717 x 10 ## origin dest dep_delay arr_delay arr_delay_test dep_time ## <chr> <chr> <dbl> <dbl> <dbl> <dttm> ## 1 LGA XNA -5 NA 89 2013-01-01 15:25:00 ## 2 EWR STL 29 NA 195 2013-01-01 15:28:00 ## 3 LGA XNA -5 NA 98 2013-01-01 17:40:00 ## 4 EWR SAN 29 NA 108 2013-01-01 18:07:00 ## 5 JFK DFW 59 NA -1282 2013-01-01 19:39:00 ## 6 EWR TUL 22 NA 111 2013-01-01 19:52:00 ## 7 EWR XNA 43 NA 148 2013-01-02 09:05:00 ## 8 LGA GRR 120 NA 179 2013-01-02 11:25:00 ## 9 JFK DFW 8 NA 102 2013-01-02 18:48:00 ## 10 EWR MCI 85 NA 177 2013-01-02 18:49:00 ## # ... 
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/18-pipes.html
Ch. 18: Pipes (notes only) ========================== * `pryr::object_size` gives the memory occupied by all of its arguments (note that the built\-in `object.size` does not allow measuring multiple objects, so it can't see shared space). This function is actually shown in chapter 18: Pipes * Some functions do not work naturally with the pipe. + If you want to use `assign` with the pipe, you must be explicit about the environment ``` env <- environment() assign("x", 100, envir = env) ``` * `try`, `tryCatch`, `suppressMessages`, and `suppressWarnings` from base R also do not work well with the pipe (see the sketch at the end of this chapter) Other pipes: the 'T pipe', `%T>%`, returns the left\-hand side rather than the right\-hand side – it lets the plot print, but then continues the chain. Notice that this doesn't work quite the same way for ggplot, as ggplot does return an object ``` rnorm(100) %>% matrix(ncol = 2) %T>% plot() %>% str() ``` ``` ## num [1:50, 1:2] -0.25 -0.924 0.894 0.298 0.347 ... ``` ``` iris %>% select(Sepal.Length, Sepal.Width) %T>% plot() %>% select(Sepal.Length) %>% head(10) ``` ``` ## Sepal.Length ## 1 5.1 ## 2 4.9 ## 3 4.7 ## 4 4.6 ## 5 5.0 ## 6 5.4 ## 7 4.6 ## 8 5.0 ## 9 4.4 ## 10 4.9 ``` * `%$%` "explodes" the variables in the left\-hand side so they can be referred to by name; I personally prefer using the `with()` function for this instead as I find it to be a little more readable… + The two examples below are equivalent ``` mtcars %$% cor(disp, mpg) ``` ``` ## [1] -0.8475514 ``` ``` mtcars %>% with(cor(disp, mpg)) ``` ``` ## [1] -0.8475514 ```
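As a follow\-up to the note above that `try`, `tryCatch`, `suppressMessages`, and `suppressWarnings` don't work well with the pipe: below is a minimal sketch (my own addition, not from the original notes) of the usual workaround – wrap the entire piped expression, or pipe into an anonymous function so the wrapper can sit inside the chain.

```
library(magrittr)

# Wrap the whole chain -- reads "inside-out", but works
suppressWarnings(c(-1, 4, 9) %>% sqrt() %>% sum())
# returns NaN; the "NaNs produced" warning from sqrt(-1) is silenced

# Or pipe into an anonymous function so the wrapper stays in the chain
c(-1, 4, 9) %>%
  (function(x) suppressWarnings(sqrt(x))) %>%
  sum()
# returns NaN as well
```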
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/19-functions.html
Ch. 19: Functions ================= **Key questions:** * 19\.3\.1\. \#1 * 19\.4\.4\. \#2, 3 * 19\.5\.5\. \#2 (actually make a function that fixes this) **Functions and notes:** * `function_name <- function(input1, input2) {}` * `if () {}` * `else if () {}` * `else {}` * `||` (or) , `&&` (and) – used to combine multiple logical expressions * `|` and `&` are vectorized operations not to be used with `if` statements * `any` checks if any values in vector are `TRUE` * `all` checks if all values in vector are `TRUE` * `identical` strict check for if values are the same * `dplyr::near` for a more lenient comparison; typically better than `identical` as it tolerates floating point rounding and works across numeric types * `switch` evaluates selected code based on position or name (good for replacing a long chain of `if` statements). e.g. ``` Operation_Times2 <- function(x, y, op){ first_op <- switch(op, plus = x + y, minus = x - y, times = x * y, divide = x / y, stop("Unknown op!") ) first_op * 2 } Operation_Times2(5, 7, "plus") ``` ``` ## [1] 24 ``` * `stop` stops execution and throws an error; note that the `call.` argument defaults to `TRUE` (set `call. = FALSE` to leave the call itself out of the error message) * `warning` generates a warning that corresponds with its arguments; `suppressWarnings` may also be useful to know about here * `stopifnot` tests multiple arguments and will produce an error message if any are not true – a compromise that saves the tedious work of writing multiple `if () {} else stop()` statements * `...` useful catch\-all if your function primarily wraps another function (note that misspelled arguments will not raise an error), e.g. ``` commas <- function(...) stringr::str_c(..., collapse = ", ") commas(letters[1:10]) ``` ``` ## [1] "a, b, c, d, e, f, g, h, i, j" ``` * `cut` can be used to discretize continuous variables (also saves long `if` statements) * `return` allows you to return from the function early; typically reserve its use for when the function can return early with a simpler result * `cat` prints a label in the output * `invisible` returns its input without printing it 19\.2: When should you write a function? ---------------------------------------- ### 19\.2\.1 1. Why is `TRUE` not a parameter to `rescale01()`? What would happen if `x` contained a single missing value, and `na.rm` was `FALSE`? * `TRUE` doesn't change between uses. * The output would be `NA` 2. In the second variant of `rescale01()`, infinite values are left unchanged. Rewrite `rescale01()` so that `-Inf` is mapped to 0, and `Inf` is mapped to 1\. ``` rescale01_inf <- function(x){ rng <- range(x, na.rm = TRUE, finite = TRUE) x_scaled <- (x - rng[1]) / (rng[2] - rng[1]) is_inf <- is.infinite(x) is_less0 <- x < 0 x_scaled[is_inf & is_less0] <- 0 x_scaled[is_inf & (!is_less0)] <- 1 x_scaled } x <- c(Inf, -Inf, 0, 3, -5) rescale01_inf(x) ``` ``` ## [1] 1.000 0.000 0.625 1.000 0.000 ``` 3. Practice turning the following code snippets into functions. Think about what each function does. What would you call it? How many arguments does it need? Can you rewrite it to be more expressive or less duplicative? 
``` mean(is.na(x)) x / sum(x, na.rm = TRUE) sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE) ``` * See solutions below: ``` x <- c(1, 4, 2, 0, NA, 3, NA) #mean(is.na(x)) perc_na <- function(x) { is.na(x) %>% mean() } perc_na(x) ``` ``` ## [1] 0.2857143 ``` ``` #x / sum(x, na.rm = TRUE) prop_weighted <- function(x) { x / sum(x, na.rm = TRUE) } prop_weighted(x) ``` ``` ## [1] 0.1 0.4 0.2 0.0 NA 0.3 NA ``` ``` #sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE) CoefficientOfVariation <- function(x) { sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE) } CoefficientOfVariation(x) ``` ``` ## [1] 0.7905694 ``` 4. Follow [http://nicercode.github.io/intro/writing\-functions.html](http://nicercode.github.io/intro/writing-functions.html) to write your own functions to compute the variance and skew of a numeric vector. * Re\-do below to write measures for skew and variance (e.g. kurtosis, etc.) ``` var_bry <- function(x){ sum((x - mean(x)) ^ 2) / (length(x) - 1) } skewness_bry <- function(x) { mean((x - mean(x)) ^ 3) / var_bry(x) ^ (3 / 2) } ``` Let’s create some samples of distributions – normal, t (with 7 degrees of freedom), unifrom, poisson (with lambda of 2\). Note that an example with a cauchy distribution and looking at difference in kurtosis between that and a normal distribution has been moved to the [Appendix](28-graphics-for-communication.html#appendix-13) section [19\.2\.1\.4](19-functions.html#section-71). ``` nd <- rnorm(10000) td_df7 <- rt(10000, df = 7) ud <- runif(10000) pd_l2 <- rpois(10000, 2) ``` Verify that these functions match with established functions ``` dplyr::near(skewness_bry(pd_l2), e1071::skewness(pd_l2, type = 3)) ``` ``` ## [1] TRUE ``` ``` dplyr::near(var_bry(pd_l2), var(pd_l2)) ``` ``` ## [1] TRUE ``` Let’s look at the distributions as well as their variance an skewness ``` distributions_df <- tibble(normal_dist = nd, t_7df_dist = td_df7, uniform_dist = ud, poisson_dist = pd_l2) distributions_df %>% gather(normal_dist:poisson_dist, value = "sample", key = "dist_type") %>% mutate(dist_type = factor(forcats::fct_inorder(dist_type))) %>% ggplot(aes(x = sample))+ geom_histogram()+ facet_wrap(~ dist_type, scales = "free") ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` ``` tibble(dist_type = names(distributions_df), skewness = purrr::map_dbl(distributions_df, skewness_bry), variance = purrr::map_dbl(distributions_df, var_bry)) ``` ``` ## # A tibble: 4 x 3 ## dist_type skewness variance ## <chr> <dbl> <dbl> ## 1 normal_dist 0.0561 0.988 ## 2 t_7df_dist -0.0263 1.40 ## 3 uniform_dist -0.00678 0.0833 ## 4 poisson_dist 0.680 1.98 ``` * excellent video explaining intuition behind skewness: [https://www.youtube.com/watch?v\=z3XaFUP1rAM](https://www.youtube.com/watch?v=z3XaFUP1rAM) 5. Write `both_na()`, a function that takes two vectors of the same length and returns the number of positions that have an `NA` in both vectors. ``` both_na <- function(x, y) { if (length(x) == length(y)) { sum(is.na(x) & is.na(y)) } else stop("Vectors are not equal length") } x <- c(4, NA, 7, NA, 3) y <- c(NA, NA, 5, NA, 0) z <- c(NA, 4) both_na(x, y) ``` ``` ## [1] 2 ``` ``` both_na(x, z) ``` ``` ## Error in both_na(x, z): Vectors are not equal length ``` 6. What do the following functions do? Why are they useful even though they are so short? ``` is_directory <- function(x) file.info(x)$isdir is_readable <- function(x) file.access(x, 4) == 0 ``` * first checks if what is being referred to is actually a directory * second checks if a specific file is readable 7. 
Read the [complete lyrics](https://en.wikipedia.org/wiki/Little_Bunny_Foo_Foo) to “Little Bunny Foo Foo”. There’s a lot of duplication in this song. Extend the initial piping example to recreate the complete song, and use functions to reduce the duplication. 19\.3: Functions are for humans and computers --------------------------------------------- * Recommends snake\_case over camelCase, but just choose one and be consistent * When functions have a link, common prefix over suffix (i.e. input\_select, input\_text over, select\_input, text\_input) * ctrl \+ shift \+ r creates section breaks in R scripts like below `# test label --------------------------------------------------------------` * (though these cannot be made in markdown documents) ### 19\.3\.1 1. Read the source code for each of the following three functions, puzzle out what they do, and then brainstorm better names. ``` f1 <- function(string, prefix) { substr(string, 1, nchar(prefix)) == prefix } f2 <- function(x) { if (length(x) <= 1) return(NULL) x[-length(x)] } f3 <- function(x, y) { rep(y, length.out = length(x)) } ``` * `f1`: `check_prefix` * `f2`: `return_not_last` * `f3`: `repeat_for_length` 2. Take a function that you’ve written recently and spend 5 minutes brainstorming a better name for it and its arguments. * done seperately 3. Compare and contrast `rnorm()` and `MASS::mvrnorm()`. How could you make them more consistent? * uses mu \= and Sigma \= instead of mean \= and sd \= , and has extra parameters like tol, empirical, EISPACK * Similar in that both are pulling samples from gaussian distribution * `mvrnorm` is multivariate though, could change name to `rnorm_mv` 4. Make a case for why `norm_r()`, `norm_d()` etc would be better than `rnorm()`, `dnorm()`. Make a case for the opposite. * `norm_*` would show the commonality of them being from the same distribution. One could argue the important commonality though may be more related to it being either a random sample or a density distribution, in which case the `r*` or `d*` coming first may make more sense. To me, the fact that the help pages has all of the ‘normal distribution’ functions on the same page suggests the former may make more sense. However, I actually like having it be set\-up the way it is, because I am more likely to forget the name of the distribution type I want over the fact that I want a random sample, so it’s easier to type `r` and then do ctrl \+ space and have autocomplete help me find the specific distribution I want, e.g. `rnorm`, `runif`, `rpois`, `rbinom`… 19\.4: Conditional execution ---------------------------- Function example that uses `if` statement: ``` has_name <- function(x) { nms <- names(x) if (is.null(nms)) { rep(FALSE, length(x)) } else { !is.na(nms) & nms != "" } } ``` * note that if all names are blank, it returns the one\-unit vector value `NULL`, hence the need for the `if` statement here…[33](#fn33) ### 19\.4\.4\. 1. What’s the difference between `if` and `ifelse()`? Carefully read the help and construct three examples that illustrate the key differences. 
* `ifelse` is vectorized, `if` is not + Typically use `if` in functions when giving conditional options for how to evaluate + Typically use `ifelse` when changing specific values in a vector * If you supply `if` with a vector of length \> 1, it will use the first value ``` x <- c(3, 4, 6) y <- c("5", "c", "9") # Use `ifelse` simple transformations of values ifelse(x < 5, 0, x) ``` ``` ## [1] 0 0 6 ``` ``` # Use `if` for single condition tests cutoff_make0 <- function(x, cutoff = 0){ if(is.numeric(x)){ ifelse(x < cutoff, 0, x) } else stop("The input provided is not a numeric vector") } cutoff_make0(x, cutoff = 4) ``` ``` ## [1] 0 4 6 ``` ``` cutoff_make0(y, cutoff = 4) ``` ``` ## Error in cutoff_make0(y, cutoff = 4): The input provided is not a numeric vector ``` 2. Write a greeting function that says “good morning”, “good afternoon”, or “good evening”, depending on the time of day. (Hint: use a time argument that defaults to `lubridate::now()`. That will make it easier to test your function.) ``` greeting <- function(when) { time <- hour(when) if (time < 12 && time > 4) { greating <- "good morning" } else if (time < 17 && time >= 12) { greeting <- "good afternoon" } else greeting <- "good evening" when_char <- as.character(when) mid <- ", it is: " cat(greeting, mid, when_char, sep = "") } greeting(now()) ``` ``` ## good evening, it is: 2019-06-05 19:28:02 ``` 3. Implement a `fizzbuzz` function. It takes a single number as input. If the number is divisible by three, it returns “fizz”. If it’s divisible by five it returns “buzz”. If it’s divisible by three and five, it returns “fizzbuzz”. Otherwise, it returns the number. Make sure you first write working code before you create the function. ``` fizzbuzz <- function(x){ if(is.numeric(x) && length(x) == 1){ y <- "" if (x %% 5 == 0) y <- str_c(y, "fizz") if (x %% 3 == 0) y <- str_c(y, "buzz") if (str_length(y) == 0) { print(x) } else print(y) } else stop("Input is not a numeric vector with length 1") } fizzbuzz(4) ``` ``` ## [1] 4 ``` ``` fizzbuzz(10) ``` ``` ## [1] "fizz" ``` ``` fizzbuzz(6) ``` ``` ## [1] "buzz" ``` ``` fizzbuzz(30) ``` ``` ## [1] "fizzbuzz" ``` ``` fizzbuzz(c(34, 21)) ``` ``` ## Error in fizzbuzz(c(34, 21)): Input is not a numeric vector with length 1 ``` 4. How could you use `cut()` to simplify this set of nested if\-else statements? ``` if (temp <= 0) { "freezing" } else if (temp <= 10) { "cold" } else if (temp <= 20) { "cool" } else if (temp <= 30) { "warm" } else { "hot" } ``` * Below is example of fix ``` temp <- seq(-10, 50, 5) cut(temp, breaks = c(-Inf, 0, 10, 20, 30, Inf), #need to include negative and positive infiniity labels = c("freezing", "cold", "cool", "warm", "hot"), right = TRUE, oredered_result = TRUE) ``` ``` ## [1] freezing freezing freezing cold cold cool cool ## [8] warm warm hot hot hot hot ## Levels: freezing cold cool warm hot ``` How would you change the call to `cut()` if I’d used `<` instead of `<=`? What is the other chief advantage of `cut()` for this problem? (Hint: what happens if you have many values in `temp`?) * See below change to `right` argument ``` cut(temp, breaks = c(-Inf, 0, 10, 20, 30, Inf), #need to include negative and positive infiniity labels = c("freezing", "cold", "cool", "warm", "hot"), right = FALSE, oredered_result = TRUE) ``` ``` ## [1] freezing freezing cold cold cool cool warm ## [8] warm hot hot hot hot hot ## Levels: freezing cold cool warm hot ``` 5. What happens if you use `switch()` with numeric values? * It will return the index of the argument. 
+ In example below, I input ‘3’ into `switch` value so it does the `times` argument ``` math_operation <- function(x, y, op){ switch(op, plus = x + y, minus = x - y, times = x * y, divide = x / y, stop("Unknown op!") ) } math_operation(5, 4, 3) ``` ``` ## [1] 20 ``` 6. What does this `switch()` call do? What happens if `x` is “e”? ``` x <- "e" switch(x, a = , b = "ab", c = , d = "cd" ) ``` Experiment, then carefully read the documentation. * If `x` is ‘e’ nothing will be outputted. If `x` is ‘c’ or ‘d’ then ‘cd’ is outputted. If ‘a’ or ‘b’ then ‘ab’ is outputeed. If blank it will continue down list until reaching an argument to output. 19\.5: Function arguments ------------------------- *Common non\-descriptive short argument names:* * `x`, `y`, `z`: vectors. * `w`: a vector of weights. * `df`: a data frame. * `i`, `j`: numeric indices (typically rows and columns). * `n`: length, or number of rows. * `p`: number of columns. ### 19\.5\.5\. 1. What does `commas(letters, collapse = "-")` do? Why? * `commas` function is below ``` commas <- function(...) stringr::str_c(..., collapse = ", ") commas(letters[1:10]) ``` ``` ## [1] "a, b, c, d, e, f, g, h, i, j" ``` ``` commas(letters[1:10], collapse = "-") ``` ``` ## Error in stringr::str_c(..., collapse = ", "): formal argument "collapse" matched by multiple actual arguments ``` * The above fails because are essentially specifying two different values for the `collapse` argument * Takes in vector of mulitple strings and outputs one\-unit character string with items concatenated together and seperated by columns * Is able to do this via use of `...` that turns this into a wrapper on `stringr::str_c` with the `collapse` value specified 2. It’d be nice if you could supply multiple characters to the `pad` argument, e.g. `rule("Title", pad = "-+")`. Why doesn’t this currently work? How could you fix it? * current `rule` function is below ``` rule <- function(..., pad = "-") { title <- paste0(...) width <- getOption("width") - nchar(title) - 5 cat(title, " ", stringr::str_dup(pad, width), "\n", sep = "") } # Note that `cat` is used instead of `paste` because paste would output it as a character vector, whereas `cat` is focused on just ouptut, could also have used `print`, though print does more conversion than cat does (apparently) rule("Tis the season"," to be jolly") ``` ``` ## Tis the season to be jolly -------------------------------------------- ``` * doesn’t work because pad ends\-up being too many characters in this situation ``` rule("Tis the season"," to be jolly", pad="+-") ``` ``` ## Tis the season to be jolly +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ``` * instead would need to make the number of times `pad` is duplicated dependent on its length, see below for fix ``` rule_pad_fix <- function(..., pad = "-") { title <- paste0(...) width <- getOption("width") - nchar(title) - 5 width_fix <- width %/% stringr::str_length(pad) cat(title, " ", stringr::str_dup(pad, width_fix), "\n", sep = "") } rule_pad_fix("Tis the season"," to be jolly", pad="+-") ``` ``` ## Tis the season to be jolly +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- ``` 3. What does the `trim` argument to `mean()` do? When might you use it? * `trim` specifies proportion of data to take off from both ends, good with outliers ``` mean(c(-1000, 1:100, 100000), trim = .025) ``` ``` ## [1] 50.5 ``` 4. The default value for the `method` argument to `cor()` is `c("pearson", "kendall", "spearman")`. What does that mean? 
What value is used by default? * is showing that you can choose from any of these, will default to use `pearson` (value in first position) 19\.6: Return values -------------------- ``` show_missings <- function(df) { n <- sum(is.na(df)) cat("Missing values: ", n, "\n", sep = "") invisible(df) } x <- show_missings(mtcars) ``` ``` ## Missing values: 0 ``` ``` str(x) ``` ``` ## 'data.frame': 32 obs. of 11 variables: ## $ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ... ## $ cyl : num 6 6 4 6 8 6 8 4 4 6 ... ## $ disp: num 160 160 108 258 360 ... ## $ hp : num 110 110 93 110 175 105 245 62 95 123 ... ## $ drat: num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ... ## $ wt : num 2.62 2.88 2.32 3.21 3.44 ... ## $ qsec: num 16.5 17 18.6 19.4 17 ... ## $ vs : num 0 0 1 1 0 1 0 1 1 1 ... ## $ am : num 1 1 1 0 0 0 0 0 0 0 ... ## $ gear: num 4 4 4 3 3 3 3 4 4 4 ... ## $ carb: num 4 4 1 1 2 1 4 2 2 4 ... ``` * can still use in pipes ``` mtcars %>% show_missings() %>% mutate(mpg = ifelse(mpg < 20, NA, mpg)) %>% show_missings() ``` ``` ## Missing values: 0 ## Missing values: 18 ``` Appendix -------- ### 19\.2\.1\.4 *Function for Standard Error:* ``` x <- c(5, -2, 8, 6, 9) sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x))) ``` ``` ## [1] 1.933908 ``` ``` sample_se <- function(x) { sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x)) - 1) } #sqrt(var(x)/sum(!is.na(x))) sample_se(x) ``` ``` ## [1] 2.162175 ``` *Function for kurtosis:* ``` kurtosis_type3 <- function(x){ sum((x - mean(x)) ^ 4) / length(x) / sd(x) ^ 4 - 3 } ``` Notice differences between cauchy and normal distribution ``` set.seed(1235) norm_exp <- rnorm(10000) set.seed(1235) cauchy_exp <- rcauchy(10000) ``` ``` hist(norm_exp) hist(cauchy_exp) ``` kurtosis ``` kurtosis_type3(norm_exp) ``` ``` ## [1] 0.06382172 ``` ``` kurtosis_type3(cauchy_exp) ``` ``` ## [1] 1197.052 ``` ### 19\.2\.3\.5 ``` position_both_na <- function(x, y) { if (length(x) == length(y)) { (c(1:length(x)))[(is.na(x) & is.na(y))] } else stop("Vectors are not equal length") } x <- c(4, NA, 7, NA, 3) y <- c(NA, NA, 5, NA, 0) z <- c(NA, 4) both_na(x, y) ``` ``` ## [1] 2 ``` ``` both_na(x, z) ``` ``` ## Error in both_na(x, z): Vectors are not equal length ``` * specifies position where both are `NA` * second example shows returning of 'stop' argument
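One note on the appendix chunk above: it defines `position_both_na()` but then calls `both_na()` from question 5, so the printed result is the *count* of shared `NA`s rather than their positions. A small sketch (my addition) of calling the new function directly, with the expected result as a comment:

```
# x and y as defined above
x <- c(4, NA, 7, NA, 3)
y <- c(NA, NA, 5, NA, 0)

# returns the positions at which both vectors are NA
position_both_na(x, y)
# expected: 2 4
```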
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/20-vectors.html
Ch. 20: Vectors =============== **Key questions:** * 20\.3\.5, \#1 * 20\.4\.6\. \#1, 4, 5 **Functions and notes:** *Types of vectors, not including augmented types:* * Check special value types: `is.finite`, `is.infinite`, `is.na`, `is.nan` * `typeof` returns the type of a vector * `length` returns the length of a vector * `pryr::object_size` views the size of a stored object * specific `NA` values can be defined explicitly with `NA_integer_`, `NA_real_`, `NA_character_` (usually don't need to know) * explicitly differentiate integers from doubles with `10L` v `10` * explicit coercion functions: `as.logical`, `as.integer`, `as.double`, `as.character`, or use `col_[types]` when reading in so that coercion is done at the source * test functions from `purrr` package that are more consistent than base R's *Purrr versions for testing types:* * `set_names` lets you set names after the fact, e.g. `set_names(1:3, c("a", "b", "c"))` * For more details on subsetting: [http://adv\-r.had.co.nz/Subsetting.html\#applications](http://adv-r.had.co.nz/Subsetting.html#applications) * `str` checks structure (excellent for use with lists) * `attr` and `attributes` get and set attributes + main types of attributes: **Names**, **Dimensions/dims**, **Class**. Class is important for object oriented programming which is covered in more detail here: [http://adv\-r.had.co.nz/OO\-essentials.html\#s3](http://adv-r.had.co.nz/OO-essentials.html#s3) * Used together to investigate details of code for functions + `UseMethod` in function syntax indicates it is a generic function + `methods` lists all methods within a generic + `getS3method` to show specific implementation of a method ``` as.Date ``` ``` ## function (x, ...) ## UseMethod("as.Date") ## <bytecode: 0x000000001d6f1aa0> ## <environment: namespace:base> ``` ``` methods("as.Date") ``` ``` ## [1] as.Date.character as.Date.default as.Date.factor as.Date.numeric ## [5] as.Date.POSIXct as.Date.POSIXlt ## see '?methods' for accessing help and source code ``` ``` getS3method("as.Date", "default") ``` ``` ## function (x, ...) ## { ## if (inherits(x, "Date")) ## x ## else if (is.logical(x) && all(is.na(x))) ## .Date(as.numeric(x)) ## else stop(gettextf("do not know how to convert '%s' to class %s", ## deparse(substitute(x)), dQuote("Date")), domain = NA) ## } ## <bytecode: 0x0000000019e1f0e8> ## <environment: namespace:base> ``` * Some tidyverse functions are not always easy to unpack with just the above[34](#fn34) * **Augmented vectors**: vectors with additional attributes, e.g. factors (levels, class \= factors), dates and datetimes (tzone, class \= (POSIXct, POSIXt)), POSIXlt (names, class \= (POSIXlt, POSIXt)), tibbles (names, class \= (tbl\_df, tbl, data.frame), row.names) – in the integer, double and double, list, list types. + data frames only have class `data.frame`, whereas tibbles have `tbl_df`, and `tbl` as well * `class` get or set class attribute * `unclass` returns copy with 'class' attribute removed 20\.3: Important types of atomic vector --------------------------------------- ### 20\.3\.5 1. Describe the difference between `is.finite(x)` and `!is.infinite(x)`. * `is.finite` and `is.infinite` return `FALSE` for `NA` or `NaN` values, therefore these values become `TRUE` when negated as in the latter case, e.g.: ``` is.finite(c(6,11,-Inf, NA, NaN)) ``` ``` ## [1] TRUE TRUE FALSE FALSE FALSE ``` ``` !is.infinite(c(6,11,-Inf, NA, NaN)) ``` ``` ## [1] TRUE TRUE FALSE TRUE TRUE ``` 2. Read the source code for `dplyr::near()` (Hint: to see the source code, drop the `()`). How does it work? 
* safer way to test equality of floating point numbers (as it has some tolerance for differences caused by rounding) * it checks if the difference between the values is within `tol`, which by default is `.Machine$double.eps^0.5` 3. A logical vector can take 3 possible values. How many possible values can an integer vector take? How many possible values can a double take? Use google to do some research. * Part of the point here is that it’s not ‘infinite’ like someone may be tempted to answer – it’s constrained by how the values are stored in memory * For integers it is roughly 2 \* 2 \* 10^9: R’s integers are 32\-bit, spanning about \-2 \* 10^9 to 2 \* 10^9 (around 4 billion distinct values) * For doubles the range extends to about 1\.8 \* 10^308, but as 64\-bit values they can represent at most around 2^64 distinct values 4. Brainstorm at least four functions that allow you to convert a double to an integer. How do they differ? Be precise. * `as.integer`, `as.factor` (technically this goes to a factor – but that class is built on top of integers), `round`, `floor`, `ceiling`; the last 3 though do not change the type, which remains double 5. What functions from the `readr` package allow you to turn a string into logical, integer, and double vector? * The appropriate `parse_*` or `col_*` functions 20\.4: Using atomic vectors --------------------------- ### 20\.4\.6 1. What does `mean(is.na(x))` tell you about a vector `x`? What about `sum(!is.finite(x))`? * the proportion of values that are `NA` or `NaN` * the number of values that are either `Inf`, `-Inf`, `NA` or `NaN` 2. Carefully read the documentation of `is.vector()`. What does it actually test for? Why does `is.atomic()` not agree with the definition of atomic vectors above? * `is.vector` tests if it is a specific type of vector with no attributes other than names. This second requirement means that any augmented vectors such as factors, dates, and tibbles would all return `FALSE`. * `is.atomic` returns `TRUE` for `is.atomic(NULL)` despite `NULL` representing the empty set. 3. Compare and contrast `setNames()` with `purrr::set_names()`. * both assign names after the fact * `purrr::set_names` is stricter and returns an error in situations like the following, whereas `setNames` does not: ``` setNames(1:4, c("a")) ``` ``` ## a <NA> <NA> <NA> ## 1 2 3 4 ``` ``` set_names(1:4, c("a")) ``` ``` ## Error: `nm` must be `NULL` or a character vector the same length as `x` ``` 4. Create functions that take a vector as input and return: ``` x <- c(-3:14, NA, Inf, NaN) ``` 1. The last value. Should you use `[` or `[[`? ``` return_last <- function(x) x[[length(x)]] return_last(x) ``` ``` ## [1] NaN ``` 1. The elements at even numbered positions. ``` return_even <- function(x) x[((1:length(x)) %% 2 == 0)] return_even(x) ``` ``` ## [1] -2 0 2 4 6 8 10 12 14 Inf ``` 1. Every element except the last value. ``` return_not_last <- function(x) x[-length(x)] return_not_last(x) ``` ``` ## [1] -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 ## [18] 14 NA Inf ``` 1. Only even numbers (and no missing values). ``` # only even and not NA return_even_no_na <- function(x) x[((1:length(x)) %% 2 == 0) & !is.na(x)] return_even_no_na(x) ``` ``` ## [1] -2 0 2 4 6 8 10 12 14 Inf ``` 5. Why is `x[-which(x > 0)]` not the same as `x[x <= 0]`? ``` x[-which(x > 0)] # which() only gives the indices of matches, so those positions get removed ``` ``` ## [1] -3 -2 -1 0 NA NaN ``` ``` x[x <= 0] # this method uses a logical vector; NaN gets converted into NA ``` ``` ## [1] -3 -2 -1 0 NA NA ``` * in the 2nd instance, `NaN`s will get converted to `NA` 6. What happens when you subset with a positive integer that’s bigger than the length of the vector? What happens when you subset with a name that doesn’t exist?
* In both cases you get back an `NA` (though it seems to take longer in the case when subsetting by a name that doesn’t exist). 20\.5: Recursive vectors (lists) -------------------------------- Example of subsetting items from a list: ``` a <- list(a = 1:3, b = "a string", c = pi, d = list(c(-1,-2), -5)) a[[4]][[1]] ``` ``` ## [1] -1 -2 ``` ``` # equivalent alternatives: # a$d[[1]] # a[4][[1]][[1]] ``` * 3 ways of subsetting: `[`, `[[`, and `$` ### 20\.5\.4\. ``` a <- list(a = 1:3, b = "a string", c = pi, d = list(-1, -5)) ``` 1. Draw the following lists as nested sets: 1. `list(a, b, list(c, d), list(e, f))` 2. `list(list(list(list(list(list(a))))))` * I did not conform with Hadley’s square v rounded syntax, but hopefully this gives a sense of what the above are: drawings 1 and 2 for 20\.5\.4\. 2. What happens if you subset a tibble as if you’re subsetting a list? What are the key differences between a list and a tibble? * A dataframe is just a list of vectors (columns), with the restriction that each column has the same number of elements, whereas lists do not have this requirement * The dataframe structure connects elements by row, making subsetting on the values in those rows much easier 20\.7: Augmented vectors ------------------------ ### 20\.7\.4 1. What does `hms::hms(3600)` return? ``` x <- hms::hms(3600) ``` How does it print? ``` print(x) ``` ``` ## 01:00:00 ``` What primitive type is the augmented vector built on top of? ``` typeof(x) ``` ``` ## [1] "double" ``` What attributes does it use? ``` attributes(x) ``` ``` ## $class ## [1] "hms" "difftime" ## ## $units ## [1] "secs" ``` 2. Try and make a tibble that has columns with different lengths. What happens? * if the column is length one it will repeat for the length of the other column(s), otherwise if it is not the same length it will return an error ``` tibble(x = 1:4, y = 5:6) #error ``` ``` ## Error: Tibble columns must have consistent lengths, only values of length one are recycled: ## * Length 2: Column `y` ## * Length 4: Column `x` ``` ``` tibble(x = 1:5, y = 6) #can have length 1 that repeats ``` ``` ## # A tibble: 5 x 2 ## x y ## <int> <dbl> ## 1 1 6 ## 2 2 6 ## 3 3 6 ## 4 4 6 ## 5 5 6 ``` 3. Based on the definition above, is it ok to have a list as a column of a tibble? * Yes, as long as the number of elements aligns with the length of the other columns – this will come up a lot in the modeling chapters. Appendix -------- ### Subsetting nested lists ``` x <- list("a", list(list("c", "d"), "e2"), list("e", "f")) x ``` ``` ## [[1]] ## [1] "a" ## ## [[2]] ## [[2]][[1]] ## [[2]][[1]][[1]] ## [1] "c" ## ## [[2]][[1]][[2]] ## [1] "d" ## ## ## [[2]][[2]] ## [1] "e2" ## ## ## [[3]] ## [[3]][[1]] ## [1] "e" ## ## [[3]][[2]] ## [1] "f" ``` It can be confusing how to subset items in a nested list; the list’s printed output helps tell you the subsetting needed to extract particular items. For example, to output `list("c", "d")` requires `x[[2]][[1]]`, and to output just `d` requires `x[[2]][[1]][[2]]`. *Subset nested lists:*
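As a supplementary sketch (not part of the original notes), `purrr::pluck()` expresses the same nested subsetting as a single call, taking the chain of positions (or names) as arguments and, by default, returning `NULL` rather than erroring when an element is missing. Using the `x` defined above:

```
library(purrr)

pluck(x, 2, 1)               # same as x[[2]][[1]], i.e. list("c", "d")
pluck(x, 2, 1, 2)            # same as x[[2]][[1]][[2]], i.e. "d"
pluck(x, 5)                  # out of bounds: returns NULL instead of erroring
pluck(x, 5, .default = NA)   # or supply a default value
```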
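### Attributes and augmented vectors (supplementary)

The notes at the top of this chapter mention `attr`, `attributes`, `class`, and `unclass` without a worked example; the following minimal sketch (not from the original notes) shows that an augmented vector is just a base type plus attributes:

```
x <- 1:3
attr(x, "greeting") <- "hello"   # attach an arbitrary attribute
attributes(x)                    # $greeting "hello"

f <- factor(c("a", "b", "a"))    # a factor is an augmented integer vector
typeof(f)                        # "integer"
attributes(f)                    # $levels ("a", "b") and $class ("factor")
unclass(f)                       # drops the class, exposing the integer codes 1 2 1
```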
Data Science
brshallo.github.io
https://brshallo.github.io/r4ds_solutions/21-iteration.html
Ch. 21: Iteration ================= **Key questions:** * 21\.2\.1\. \#1, 2 * 21\.3\.5\. \#1, 3 * 21\.4\.1\. \#2 * 21\.5\.3\. \#1 * 21\.9\.4\. \#2 **Functions and notes:** * Common `for` loop template: ``` output <- vector("double", ncol(df)) # common for loop style for (i in seq_len(length(df))){ output[[i]] <- fun(df[[i]]) } ``` * Common `while` loop template: ``` i <- 1 while (i <= length(x)){ # body i <- i + 1 } ``` * `seq_along(df)` does essentially the same as `seq_len(length(df))` * `unlist` flattens a list of vectors into a single vector + `flatten_dbl` is a stricter alternative * `dplyr::bind_rows` save output in a list of dfs and then append all at the end rather than sequentially `rbind`ing * `sample(c("T", "H"), 1)` * `sapply` is a wrapper around `lapply` that automatically simplifies its output – problematic in that you never know what the output will be * `vapply` is a safe alternative to `sapply`, e.g. for logical `vapply(df, is.numeric, logical(1))`, but `map_lgl(df, is.numeric)` is simpler * `map()` makes a list. + `map_lgl()` makes a logical vector. + `map_int()` makes an integer vector. + `map_dbl()` makes a double vector. + `map_chr()` makes a character vector. * shortcuts for applying functions in `map`: ``` models <- mtcars %>% split(.$cyl) %>% map(function(df) lm(mpg ~ wt, data = df)) models <- mtcars %>% split(.$cyl) %>% map(~lm(mpg ~ wt, data = .)) ``` * extracting by named elements from `map`: ``` models %>% map(summary) %>% map_dbl("r.squared") ``` * extracting by positions from `map` ``` x <- list(list(1, 2, 3), list(4, 5, 6), list(7, 8, 9)) x %>% map_dbl(2) ``` * `map2` lets you iterate through two components at once * `pmap` allows you to iterate over p components – works well to hold inputs in a dataframe * `safely` takes a function and returns two parts, the result and an error object + similar to `try` but more consistent * `possibly` is similar to `safely`, but you provide it a default value to return for errors * `quietly` is similar to `safely` but captures printed output, messages, and warnings * `purrr::transpose` allows you to do things like get all the 2nd elements in a list (see the sketch at the end of this chapter’s Appendix) * `invoke_map` lets you iterate over both the functions and the parameters; it has an `f` and a `param` input, e.g. ``` f <- c("runif", "rnorm", "rpois") param <- list( list(min = -1, max = 1), list(sd = 5), list(lambda = 10) ) invoke_map(f, param, n = 5) %>% str() ``` * `walk` is an alternative to `map` that you call for side effects. There are also `walk2` and `pwalk`, which are generally more useful + all invisibly return `.x` (the first argument) so they can be used in the middle of pipelines * `keep` and `discard` keep or discard elements in the input based on whether the predicate returns `TRUE` * `some` and `every` determine if the predicate is true for any or for all of our elements * `detect` finds the first element where the predicate is true, `detect_index` returns its position * `head_while` and `tail_while` take elements from the start or end of a vector while a predicate is true * `reduce` is good for applying a two\-table rule repeatedly, e.g. joins (see the sketch at the end of this chapter’s Appendix) + `accumulate` is similar but keeps all the interim results 21\.2: For loops ---------------- ### 21\.2\.1 1. Write for loops to (think about the output, sequence, and body **before** you start writing the loop): 1. Compute the mean of every column in `mtcars`.
``` output <- vector("double", length(mtcars)) for (i in seq_along(mtcars)){ output[[i]] <- mean(mtcars[[i]]) } output ``` ``` ## [1] 20.090625 6.187500 230.721875 146.687500 3.596563 3.217250 ## [7] 17.848750 0.437500 0.406250 3.687500 2.812500 ``` 1. Determine the type of each column in `nycflights13::flights`. ``` output <- vector("character", length(flights)) for (i in seq_along(flights)){ output[[i]] <- typeof(flights[[i]]) } output ``` ``` ## [1] "integer" "integer" "integer" "integer" "integer" ## [6] "double" "integer" "integer" "double" "character" ## [11] "integer" "character" "character" "character" "double" ## [16] "double" "double" "double" "double" ``` 1. Compute the number of unique values in each column of `iris`. ``` output <- vector("integer", length(iris)) for (i in seq_along(iris)){ output[[i]] <- unique(iris[[i]]) %>% length() } output ``` ``` ## [1] 35 23 43 22 3 ``` 1. Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\). ``` output <- vector("list", 4) input_means <- c(-10, 0, 10, 100) for (i in seq_along(output)){ output[[i]] <- rnorm(10, mean = input_means[[i]]) } output ``` ``` ## [[1]] ## [1] -11.371326 -10.118467 -10.582961 -10.324829 -7.604983 -9.300232 ## [7] -9.840124 -9.719733 -9.784274 -10.338814 ## ## [[2]] ## [1] -1.04951842 -0.68385670 0.17893523 0.07338463 -1.18028235 ## [6] -1.00777188 0.91491408 -0.14041984 -0.25074297 -0.50055019 ## ## [[3]] ## [1] 11.013913 9.790495 10.631115 10.325991 10.608040 9.463515 11.265961 ## [8] 10.630382 10.436201 8.907654 ## ## [[4]] ## [1] 99.37012 100.31396 99.06230 98.00350 100.31506 99.67347 101.02248 ## [8] 98.32484 98.62669 100.28487 ``` 2. Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors: *example:* ``` out <- "" for (x in letters) { out <- stringr::str_c(out, x) } out ``` * collabse letters into length\-one character vector with all characters concatenated ``` str_c(letters, collapse = "") ``` ``` ## [1] "abcdefghijklmnopqrstuvwxyz" ``` *example:* ``` x <- sample(100) sd <- 0 for (i in seq_along(x)) { sd <- sd + (x[i] - mean(x)) ^ 2 } sd <- sqrt(sd / (length(x) - 1)) sd ``` ``` ## [1] 29.01149 ``` * calculate standard deviaiton of x ``` sd(x) ``` ``` ## [1] 29.01149 ``` *example:* ``` x <- runif(100) out <- vector("numeric", length(x)) out[1] <- x[1] for (i in 2:length(x)) { out[i] <- out[i - 1] + x[i] } out ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 
46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` * calculate cumulative sum ``` cumsum(x) ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` 3. Combine your function writing and for loop skills: 1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”. ``` num_humps <- c("five", "four", "three", "two", "one", "no") for (i in seq_along(num_humps)){ paste0("Alice the camel has ", num_humps[[i]], " humps.") %>% rep(3) %>% writeLines() writeLines("So go, Alice, go.\n") } ``` ``` ## Alice the camel has five humps. ## Alice the camel has five humps. ## Alice the camel has five humps. ## So go, Alice, go. ## ## Alice the camel has four humps. ## Alice the camel has four humps. ## Alice the camel has four humps. ## So go, Alice, go. ## ## Alice the camel has three humps. ## Alice the camel has three humps. ## Alice the camel has three humps. ## So go, Alice, go. ## ## Alice the camel has two humps. ## Alice the camel has two humps. ## Alice the camel has two humps. ## So go, Alice, go. ## ## Alice the camel has one humps. ## Alice the camel has one humps. ## Alice the camel has one humps. ## So go, Alice, go. ## ## Alice the camel has no humps. ## Alice the camel has no humps. ## Alice the camel has no humps. ## So go, Alice, go. ``` 2. Convert the nursery rhyme “ten in the bed” to a function. Generalise it to any number of people in any sleeping structure. ``` nursery_bed <- function(num, y) { output <- vector("character", num) for (i in seq_along(output)) { output[[i]] <- str_replace_all( 'There were x in the _y\n And the little one said, \n"Roll over! Roll over!"\n So they all rolled over and\n one fell out.', c("x" = (length(output) - i + 1), "_y" = y)) } str_c(output, collapse = "\n\n") %>% writeLines() } nursery_bed(3, "asteroid") ``` ``` ## There were 3 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 2 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 1 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ``` 3. Convert the song “99 bottles of beer on the wall” to a function. 
Generalise to any number of any vessel containing any liquid on any surface. * This is a little bit of a lazy version… ``` beer_rhyme <- function(x, y, z){ output <- vector("character", x) for (i in seq_along(output)){ output[i] <- str_replace_all("x bottles of y on the z.\n One fell off...", c( "x" = (x - i + 1), "y" = y, "z" = z )) } output <- (str_c(output, collapse = "\n") %>% str_c("\nNo more bottles...", collapse = "")) writeLines(output) } beer_rhyme(4, "soda", "toilet") ``` ``` ## 4 bottles of soda on the toilet. ## One fell off... ## 3 bottles of soda on the toilet. ## One fell off... ## 2 bottles of soda on the toilet. ## One fell off... ## 1 bottles of soda on the toilet. ## One fell off... ## No more bottles... ``` 4. It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step. How does this affect performance? Design and execute an experiment. ``` preallocate <- function(){ x <- vector("double", 100) for (i in seq_along(x)){ x[i] <- rnorm(1) } } growing <- function(){ x <- c(0) for (i in 1:100){ x[i] <- rnorm(1) } } microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 178.0 183.6 523.665 308.05 342.65 4991.2 20 a ## no_space 213.6 222.2 531.440 344.50 429.05 4081.9 20 a ``` * preallocating performs somewhat better here (compare the medians above), though the gap in this run is modest * note: if you can do these operations with a vectorized approach, they’re often much faster ``` microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), vector = rnorm(100), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 155.8 161.45 177.075 163.50 166.20 349.6 20 b ## no_space 185.8 193.65 202.390 199.55 205.25 234.5 20 c ## vector 8.8 9.45 9.795 9.70 10.10 11.0 20 a ``` * vectorized was \> 10x faster 21\.3 For loop variations ------------------------- ### 21\.3\.5 1. Imagine you have a directory full of CSV files that you want to read in. You have their paths in a vector, `files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)`, and now want to read each one with `read_csv()`. Write the for loop that will load them into a single data frame. * To start this problem, I first created a file directory, and then wrote in 26 csvs each with the most popular name from each year since 1880 for a particular letter[35](#fn35). * Next I read these into a single dataframe with a for loop (see `read_dir` in the Appendix) 2. *What happens if you use `for (nm in names(x))` and `x` has no names?* ``` x <- list(1:10, 11:18, 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` * `names(x)` is `NULL`, so there is nothing to iterate over – the loop body never runs and nothing is printed. *What if only some of the elements are named?* ``` x <- list(a = 1:10, 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## NULL ## [1] 19 20 21 22 23 24 25 ``` * you get output for the elements with names and `NULL` for the one without. *What if the names are not unique?* ``` x <- list(a = 1:10, a = 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 19 20 21 22 23 24 25 ``` * `x[[nm]]` always returns the first element with that name, so the first value is printed for each repetition of the name 3. Write a function that prints the mean of each numeric column in a data frame, along with its name.
For example, `show_mean(iris)` would print: ``` show_mean(iris) #> Sepal.Length: 5.84 #> Sepal.Width: 3.06 #> Petal.Length: 3.76 #> Petal.Width: 1.20 ``` (Extra challenge: what function did I use to make sure that the numbers lined up nicely, even though the variable names had different lengths?) ``` show_mean <- function(df){ # select just cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } df_select <- df[out] # keep/discard funs would have made this easy # make list of values w/ mean means <- vector("list", length(df_select)) names(means) <- names(df_select) for (i in seq_along(df_select)){ means[[i]] <- mean(df_select[[i]], na.rm = TRUE) %>% round(digits = 2) } # print out, use method to identify max chars for vars printed means_names <- names(means) chars_max <- (str_count(means_names) + str_count(as.character(means))) %>% max() chars_pad <- chars_max - (str_count(means_names) + str_count(as.character(means))) names(chars_pad) <- means_names str_c(means_names, ": ", str_dup(" ", chars_pad), means) %>% writeLines() } show_mean(flights) ``` ``` ## year: 2013 ## month: 6.55 ## day: 15.71 ## dep_time: 1349.11 ## sched_dep_time: 1344.25 ## dep_delay: 12.64 ## arr_time: 1502.05 ## sched_arr_time: 1536.38 ## arr_delay: 6.9 ## flight: 1971.92 ## air_time: 150.69 ## distance: 1039.91 ## hour: 13.18 ## minute: 26.23 ``` 4. What does this code do? How does it work? ``` trans <- list( disp = function(x) x * 0.0163871, am = function(x) { factor(x, labels = c("auto", "manual")) } ) for (var in names(trans)) { mtcars[[var]] <- trans[[var]](mtcars[[var]]) } mtcars ``` * first part builds list of functions, 2nd applies those to a dataset * are storing the data transformations as a function and then applying this to a dataframe[36](#fn36) 21\.4: For loops vs. functionals -------------------------------- ### 21\.4\.1 1. Read the documentation for `apply()`. In the 2d case, what two for loops does it generalise? * It allows you to input either 1 or 2 for the `MARGIN` argument, which corresponds with looping over either the rows or the columns. 2. Adapt `col_summary()` so that it only applies to numeric columns You might want to start with an `is_numeric()` function that returns a logical vector that has a TRUE corresponding to each numeric column. ``` col_summary_gen <- function(df, fun, ...) { #find cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } #make list of values w/ mean df_select <- df[out] output <- vector("list", length(df_select)) names(output) <- names(df_select) for (nm in names(output)) { output[[nm]] <- fun(df_select[[nm]], ...) %>% round(digits = 2) } as_tibble(output) } col_summary_gen(flights, fun = median, na.rm = TRUE) %>% gather() # trick to gather all easily ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 7 ## 3 day 16 ## 4 dep_time 1401 ## 5 sched_dep_time 1359 ## 6 dep_delay -2 ## 7 arr_time 1535 ## 8 sched_arr_time 1556 ## 9 arr_delay -5 ## 10 flight 1496 ## 11 air_time 129 ## 12 distance 872 ## 13 hour 13 ## 14 minute 29 ``` * the `...` makes this so you can add arguments to the functions. 21\.5: The map functions ------------------------ ### 21\.5\.3 1. 
Write code that uses one of the map functions to: *Compute the mean of every column in `mtcars`.* ``` purrr::map_dbl(mtcars, mean) ``` ``` ## mpg cyl disp hp drat wt ## 20.090625 6.187500 230.721875 146.687500 3.596563 3.217250 ## qsec vs am gear carb ## 17.848750 0.437500 0.406250 3.687500 2.812500 ``` *Determine the type of each column in `nycflights13::flights`.* ``` purrr::map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` *Compute the number of unique values in each column of `iris`.* ``` purrr::map(iris, unique) %>% map_dbl(length) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 35 23 43 22 3 ``` *Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\).* ``` purrr::map(c(-10, 0, 10, 100), rnorm, n = 10) ``` ``` ## [[1]] ## [1] -11.668016 -10.174630 -9.873417 -9.935144 -9.549267 -9.989001 ## [7] -9.991157 -9.490583 -9.020713 -11.215907 ## ## [[2]] ## [1] -1.3330518 1.7970408 -0.7859694 -1.5184894 0.4544287 0.2134496 ## [7] -1.0761067 0.1600194 -0.1258518 -0.6974829 ## ## [[3]] ## [1] 10.334081 9.523160 9.730305 10.855434 10.899334 11.522520 9.532049 ## [8] 9.778320 10.276128 9.939547 ## ## [[4]] ## [1] 98.63699 100.57597 100.23664 99.65274 100.66985 99.86635 99.79877 ## [8] 98.84634 101.00019 99.09162 ``` ``` # purrr::map_dbl(flights, ~mean(is.na(.x))) ``` 2. How can you create a single vector that for each column in a data frame indicates whether or not it’s a factor? ``` purrr::map_lgl(iris, is.factor) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## FALSE FALSE FALSE FALSE TRUE ``` 3. What happens when you use the map functions on vectors that aren’t lists? What does `map(1:5, runif)` do? Why? ``` purrr::map(1:5, rnorm) ``` ``` ## [[1]] ## [1] 0.26078 ## ## [[2]] ## [1] 0.39670324 0.03106982 ## ## [[3]] ## [1] 1.0644632 -0.1632358 -1.0353975 ## ## [[4]] ## [1] -0.3556528 -0.5027896 2.0659595 -0.1360896 ## ## [[5]] ## [1] 0.50936851 0.16219258 -1.53746908 -0.04141543 -0.79950355 ``` * It runs on each item in the vector. * `map()` runs on each element item within the input, i.e .x\[\[1]], .x\[\[2]], .x\[\[n]]. The elements of a numeric vector are scalars (or technically length 1 numeric vectors) * In this case then it is passing the values 1, 2, 3, 4, 5 into the first argument of `rnorm` for each run, hence pattern above. 4. What does `map(-2:2, rnorm, n = 5)` do? Why? ``` map(-2:2, rnorm, n = 5) ``` ``` ## [[1]] ## [1] -1.829446 -3.357986 -3.582975 -2.039341 -2.087265 ## ## [[2]] ## [1] -0.6831658 -0.8729133 -0.3192894 -1.3425364 0.2383131 ## ## [[3]] ## [1] 0.43215278 -0.07629132 -0.14400722 1.85870258 0.13472292 ## ## [[4]] ## [1] -0.22256104 2.00645188 -0.06027834 1.44273092 0.69404413 ## ## [[5]] ## [1] 1.642268 2.233247 2.021023 1.988244 2.798515 ``` * It makes 5 vectors each of length 5 with the values centered at the means of \-2,\-1, 0, 1, 2 respectively. * The reason is that the default filling of the first argument is already named by the defined input of ‘n \= 5’, therefore, the inputs are instead going to the 2nd argument, and hence become the mean of the different rnorm calls. 5. 
Rewrite `map(x, function(df) lm(mpg ~ wt, data = df))` to eliminate the anonymous function. ``` mtcars %>% split(.$cyl) %>% purrr::map( ~ lm(mpg ~ wt, data = .)) ``` 21\.9 Other patterns of for loops --------------------------------- ### 21\.9\.3 1. Implement your own version of `every()` using a for loop. Compare it with `purrr::every()`. What does purrr’s version do that your version doesn’t? ``` every_loop <- function(x, fun, ...) { output <- vector("list", length(x)) for (i in seq_along(x)) { output[[i]] <- fun(x[[i]]) } total <- flatten_lgl(output) sum(total) == length(x) } x <- list(flights, mtcars, iris) every_loop(x, is.data.frame) ``` ``` ## [1] TRUE ``` ``` every(x, is.data.frame) ``` ``` ## [1] TRUE ``` 2. Create an enhanced `col_sum()` that applies a summary function to every numeric column in a data frame. ``` col_summary_enh <- function(x,fun){ x %>% keep(is.numeric) %>% purrr::map_dbl(fun) } col_summary_enh(mtcars, median) ``` ``` ## mpg cyl disp hp drat wt qsec vs am ## 19.200 6.000 196.300 123.000 3.695 3.325 17.710 0.000 0.000 ## gear carb ## 4.000 2.000 ``` 3. A possible base R equivalent of `col_sum()` is: ``` col_sum3 <- function(df, f) { is_num <- sapply(df, is.numeric) df_num <- df[, is_num] sapply(df_num, f) } ``` But it has a number of bugs as illustrated with the following inputs: ``` df <- tibble( x = 1:3, y = 3:1, z = c("a", "b", "c") ) # OK col_sum3(df, mean) # Has problems: don't always return numeric vector col_sum3(df[1:2], mean) col_sum3(df[1], mean) col_sum3(df[0], mean) ``` What causes the bugs? * The output type is not always consistent – `sapply` does not guarantee a numeric vector and can simplify differently (or return a list) depending on the number of columns. It also returns an error for the zero\-column input due to an indexing issue. Appendix -------- ### 21\.3\.5\.1 #### Using map ``` outputted_csv <- files_example %>% mutate(csv_data = map(file_paths, read_csv)) outputted_csv <- files_example %>% mutate(csv_data = map(file_paths, safely(read_csv))) ``` #### Plot of names * Below is a plot of the proportion of individuals given the most popular name for each letter in each year. This suggests that the top names by letter do not make up as large a proportion of the population as they did historically. ``` names_appended %>% ggplot(aes(x = year, y = prop, colour = first_letter))+ geom_line() ``` #### csv other example The code below might be used to read csvs from a shared drive. I added on the ‘file\_path\_pull’ and ‘files\_example’ components to add in information on the file paths and other details that were relevant.
You might also add this data into a new column on the output… ``` files_path_pull <- dir("//companydomain.com/directory/", pattern = "csv$", full.names = TRUE) files_example <- tibble(file_paths = files_path_pull[1:2]) %>% extract(file_paths, into = c("path", "name"), regex = "(.*)([0-9]{4}-[0-9]{2}-[0-9]{2})", remove = FALSE) read_dir <- function(dir){ #input vector of file paths name and output appended file out <- vector("list", length(dir)) for (i in seq_along(out)){ out[[i]] <- read_csv(dir[[i]]) } out <- bind_rows(out) out } read_dir(files_example$file_paths) ``` ### 21\.3\.5\.2 (with purrr) ``` purrr::map_lgl(iris, is.factor) %>% tibble::enframe() ``` ``` ## # A tibble: 5 x 2 ## name value ## <chr> <lgl> ## 1 Sepal.Length FALSE ## 2 Sepal.Width FALSE ## 3 Petal.Length FALSE ## 4 Petal.Width FALSE ## 5 Species TRUE ``` Slightly less attractive printing ``` show_mean2 <- function(df) { df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) } show_mean2(flights) ``` ``` ## year month day dep_time sched_dep_time ## 2013.000000 6.548510 15.710787 1349.109947 1344.254840 ## dep_delay arr_time sched_arr_time arr_delay flight ## 12.639070 1502.054999 1536.380220 6.895377 1971.923620 ## air_time distance hour minute ## 150.686460 1039.912604 13.180247 26.230100 ``` Maybe slightly better printing and in df ``` show_mean3 <- function(df){ df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) %>% as_tibble() %>% mutate(names = row.names(.)) } show_mean3(flights) ``` ``` ## Warning: Calling `as_tibble()` on a vector is discouraged, because the behavior is likely to change in the future. Use `enframe(name = NULL)` instead. ## This warning is displayed once per session. ``` ``` ## # A tibble: 14 x 2 ## value names ## <dbl> <chr> ## 1 2013 1 ## 2 6.55 2 ## 3 15.7 3 ## 4 1349. 4 ## 5 1344. 5 ## 6 12.6 6 ## 7 1502. 7 ## 8 1536. 8 ## 9 6.90 9 ## 10 1972. 10 ## 11 151. 11 ## 12 1040. 12 ## 13 13.2 13 ## 14 26.2 14 ``` Other method is to take advantage of the `gather()` function ``` flights %>% keep(is.numeric) %>% map(mean, na.rm = TRUE) %>% as_tibble() %>% gather() ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 6.55 ## 3 day 15.7 ## 4 dep_time 1349. ## 5 sched_dep_time 1344. ## 6 dep_delay 12.6 ## 7 arr_time 1502. ## 8 sched_arr_time 1536. ## 9 arr_delay 6.90 ## 10 flight 1972. ## 11 air_time 151. ## 12 distance 1040. ## 13 hour 13.2 ## 14 minute 26.2 ``` ### 21\.9\.3\.1 * mine can’t handle shortcut formulas or new functions ``` z <- sample(10) z %>% every( ~ . < 11) ``` ``` ## [1] TRUE ``` ``` # e.g. below would fail # z %>% # every_loop( ~ . < 11) ``` ### 21\.9 mirroring `keep` * below is one method for passing multiple, more complex arguments through keep, though you can also use function shortcuts (`~`) in `keep` and `discard` ``` ##how to pass multiple functions through keep? #can use map to subset columns by multiple criteria and then subset at end flights %>% purrr::map(is.na) %>% purrr::map_dbl(sum) %>% purrr::map_lgl(~.>10) %>% flights[.] ``` ``` ## # A tibble: 336,776 x 6 ## dep_time dep_delay arr_time arr_delay tailnum air_time ## <int> <dbl> <int> <dbl> <chr> <dbl> ## 1 517 2 830 11 N14228 227 ## 2 533 4 850 20 N24211 227 ## 3 542 2 923 33 N619AA 160 ## 4 544 -1 1004 -18 N804JB 183 ## 5 554 -6 812 -25 N668DN 116 ## 6 554 -4 740 12 N39463 150 ## 7 555 -5 913 19 N516JB 158 ## 8 557 -3 709 -14 N829AS 53 ## 9 557 -3 838 -8 N593JB 140 ## 10 558 -2 753 8 N3ALAA 138 ## # ... 
with 336,766 more rows ``` ### invoke examples Let’s change the example to be with quantile… ``` invoke(runif, n = 10) ``` ``` ## [1] 0.775555937 0.328805817 0.920314980 0.176599637 0.210958651 ## [6] 0.890200325 0.456075735 0.498955991 0.148438198 0.001021321 ``` ``` list("01a", "01b") %>% invoke(paste, ., sep = "-") ``` ``` ## [1] "01a-01b" ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(n = 5))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 1.7150650 0.4609162 -1.2650612 -0.6868529 -0.4456620 ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(5, 50))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 51.71506 50.46092 48.73494 49.31315 49.55434 ``` ``` list(m1 = mean, m2 = median) %>% invoke_map(x = rcauchy(100)) ``` ``` ## $m1 ## [1] 0.7316016 ## ## $m2 ## [1] 0.1690467 ``` ``` rcauchy(100) ``` ``` ## [1] -1.99514216 1.57378677 1.44901985 0.82604308 2.30072052 ## [6] -0.04961749 0.52626840 0.29408692 0.47790231 -1.47138470 ## [11] -2.54305059 -0.35508248 -1.65511601 -1.08467708 -15.03813728 ## [16] -1.82118206 -0.62669137 -0.79456204 -0.06347636 5.19179251 ## [21] 1.48851593 3.42095041 0.03289526 0.65171559 -0.53864091 ## [26] 0.88812626 0.93375555 0.24570517 0.97348569 -1.11905466 ## [31] -0.51964526 128.72537963 2.72138263 0.97793363 0.36391811 ## [36] 2.77745450 -4.34935786 0.81096079 5.70518746 0.81669440 ## [41] -138.41947905 2.02359725 -1.96283674 2.40809060 2.04850398 ## [46] -9.41347275 -1.06265274 0.83312509 3.55625549 1.10375978 ## [51] -2.31140048 0.65162145 -0.45665528 -1.02179975 -1.71189590 ## [56] -2.57239721 2.35617831 -10.63750166 -0.41538322 -3.80770683 ## [61] -0.55070513 1.49607830 -1.30359005 1.09910916 -3.27457763 ## [66] 16.99304208 1.09921270 -4.86030197 -0.27969649 -0.31842181 ## [71] 1.16466121 1.59209243 -0.04514112 -2.52586678 -0.19951960 ## [76] 9.47599952 3.31841045 -1.82945785 0.51884667 -4.29179059 ## [81] 0.93155898 -0.11880720 -3.03333758 -21.16294537 3.16450655 ## [86] -0.39503234 2.19801293 1.27457150 0.59413768 0.60064481 ## [91] 17.70703023 1.01880490 0.80764382 -1.63905090 0.15086898 ## [96] -1.36865319 1.99173761 3.39988162 -0.63043489 -0.26058630 ``` Let’s store everything in a dataframe… ``` set.seed(123) tibble(funs = list(rn = "rnorm", rp = "rpois", ru = "runif"), params = list(list(n = 20, mean = 10), list(n = 20, lambda = 3), list(n = 20, min = -1, max = 1))) %>% with(invoke_map_df(funs, params)) ``` ``` ## # A tibble: 20 x 3 ## rn rp ru ## <dbl> <int> <dbl> ## 1 9.44 1 0.330 ## 2 9.77 2 -0.810 ## 3 11.6 2 -0.232 ## 4 10.1 2 -0.451 ## 5 10.1 1 0.629 ## 6 11.7 1 -0.103 ## 7 10.5 2 0.620 ## 8 8.73 3 0.625 ## 9 9.31 2 0.589 ## 10 9.55 5 -0.120 ## 11 11.2 0 0.509 ## 12 10.4 3 0.258 ## 13 10.4 4 0.420 ## 14 10.1 1 -0.999 ## 15 9.44 3 -0.0494 ## 16 11.8 2 -0.560 ## 17 10.5 1 -0.240 ## 18 8.03 4 0.226 ## 19 10.7 5 -0.296 ## 20 9.53 2 -0.778 ``` ``` map_df(iris, ~.x*2) ``` ``` ## Warning in Ops.factor(.x, 2): '*' not meaningful for factors ``` ``` ## # A tibble: 150 x 5 ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## <dbl> <dbl> <dbl> <dbl> <lgl> ## 1 10.2 7 2.8 0.4 NA ## 2 9.8 6 2.8 0.4 NA ## 3 9.4 6.4 2.6 0.4 NA ## 4 9.2 6.2 3 0.4 NA ## 5 10 7.2 2.8 0.4 NA ## 6 10.8 7.8 3.4 0.8 NA ## 7 9.2 6.8 2.8 0.6 NA ## 8 10 6.8 3 0.4 NA ## 9 8.8 5.8 2.8 0.4 NA ## 10 9.8 6.2 3 0.2 NA ## # 
... with 140 more rows ``` ``` select(iris, -Species) %>% flatten_dbl() %>% mean() ``` ``` ## [1] 3.4645 ``` ``` mean.and.median <- function(x){ list(mean = mean(x, na.rm = TRUE), median = median(x, na.rm = TRUE)) } ``` Difference between dfr and dfc, taken from here: [https://bio304\-class.github.io/bio304\-fall2017/control\-flow\-in\-R.html](https://bio304-class.github.io/bio304-fall2017/control-flow-in-R.html) ``` iris %>% select(-Species) %>% map_dfr(mean.and.median) %>% bind_cols(tibble(names = names(select(iris, -Species)))) ``` ``` ## # A tibble: 4 x 3 ## mean median names ## <dbl> <dbl> <chr> ## 1 5.84 5.8 Sepal.Length ## 2 3.06 3 Sepal.Width ## 3 3.76 4.35 Petal.Length ## 4 1.20 1.3 Petal.Width ``` ``` iris %>% select(-Species) %>% map_dfr(mean.and.median) %>% bind_cols(tibble(names = names(select(iris, -Species)))) ``` ``` ## # A tibble: 4 x 3 ## mean median names ## <dbl> <dbl> <chr> ## 1 5.84 5.8 Sepal.Length ## 2 3.06 3 Sepal.Width ## 3 3.76 4.35 Petal.Length ## 4 1.20 1.3 Petal.Width ``` ``` iris %>% select(-Species) %>% map_dfc(mean.and.median) ``` ``` ## # A tibble: 1 x 8 ## mean median mean1 median1 mean2 median2 mean3 median3 ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 5.84 5.8 3.06 3 3.76 4.35 1.20 1.3 ``` ### indexing nms caution When creating your empty list, use indexes rather than names if you are creating values, otherwise you are creating new values on the list. E.g. in the example below I the output ends up being length 6 because you have the 3 `NULL` values plus the 3 newly created named positions. ``` x <- list(a = 1:10, b = 11:18, c = 19:25) output <- vector("list", length(x)) for (nm in names(x)) { output[[nm]] <- x[[nm]] * 3 } output ``` ``` ## [[1]] ## NULL ## ## [[2]] ## NULL ## ## [[3]] ## NULL ## ## $a ## [1] 3 6 9 12 15 18 21 24 27 30 ## ## $b ## [1] 33 36 39 42 45 48 51 54 ## ## $c ## [1] 57 60 63 66 69 72 75 ``` ### in\-class notes the `map_*` functions are essentially like running a `flatten_*` after running `map`. E.g. 
the two things below are equivalent ``` map(flights, typeof) %>% flatten_chr() ``` ``` ## [1] "integer" "integer" "integer" "integer" "integer" ## [6] "double" "integer" "integer" "double" "character" ## [11] "integer" "character" "character" "character" "double" ## [16] "double" "double" "double" "double" ``` ``` map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` Calculate the number of unique values for each level ``` iris %>% map(unique) %>% map_dbl(length) map_int(iris, ~length(unique(.x))) ``` Iterate through different min and max values ``` min_params <- c(-1, 0, -10) max_params <- c(11:13) map2(.x = min_params, .y = max_params, ~runif(n = 10, min = .x, max = .y)) ``` ``` ## [[1]] ## [1] 1.9234337 7.0166670 4.0117614 8.4583500 0.2343757 4.2187129 ## [7] 10.8194838 9.7166134 9.6376287 1.1006318 ## ## [[2]] ## [1] 1.568348 7.837223 4.122198 7.881098 3.844479 2.252293 9.387532 ## [8] 1.123140 5.601348 6.138066 ## ## [[3]] ## [1] 3.7997461 -2.3450586 1.2380998 11.9528980 1.1067551 10.4780551 ## [7] 11.0320783 4.0009046 -0.5541351 -6.6168221 ``` When using `pmap` it’s often best to keep the parameters in a dataframe ``` min_df_params <- tibble(n = c(10, 15, 20, 50 ), min = c(-1, 0, 1, 2), max = c(0, 1, 2, 3)) pmap(min_df_params, runif) ``` ``` ## [[1]] ## [1] -0.06470020 -0.69877110 -0.93927943 -0.05227306 -0.27940373 ## [6] -0.85770570 -0.45071534 -0.04590876 -0.41451665 -0.59548972 ## ## [[2]] ## [1] 0.6478935 0.3198206 0.3077200 0.2197676 0.3694889 0.9842192 0.1542023 ## [8] 0.0910440 0.1419069 0.6900071 0.6192565 0.8913941 0.6729991 0.7370777 ## [15] 0.5211357 ## ## [[3]] ## [1] 1.659838 1.821805 1.786282 1.979822 1.439432 1.311702 1.409475 ## [8] 1.010467 1.183850 1.842729 1.231162 1.239100 1.076691 1.245724 ## [15] 1.732135 1.847453 1.497527 1.387909 1.246449 1.111096 ## ## [[4]] ## [1] 2.389994 2.571935 2.216893 2.444768 2.217991 2.502300 2.353905 ## [8] 2.649985 2.374714 2.355445 2.533688 2.740334 2.221103 2.412746 ## [15] 2.265687 2.629973 2.183828 2.863644 2.746568 2.668285 2.618018 ## [22] 2.372238 2.529836 2.874682 2.581750 2.839768 2.312448 2.708290 ## [29] 2.265018 2.594343 2.481290 2.265033 2.564590 2.913188 2.901874 ## [36] 2.274167 2.321483 2.985641 2.619993 2.937314 2.466533 2.406833 ## [43] 2.659230 2.152347 2.572867 2.238726 2.962359 2.601366 2.515030 ## [50] 2.402573 ``` You can often use `map` a bunch of output that can then be stored in a tibble ``` tibble(type = map_chr(mtcars, typeof), means = map_dbl(mtcars, mean), median = map_dbl(mtcars, median), names = names(mtcars)) ``` ``` ## # A tibble: 11 x 4 ## type means median names ## <chr> <dbl> <dbl> <chr> ## 1 double 20.1 19.2 mpg ## 2 double 6.19 6 cyl ## 3 double 231. 196. disp ## 4 double 147. 
123 hp ## 5 double 3.60 3.70 drat ## 6 double 3.22 3.32 wt ## 7 double 17.8 17.7 qsec ## 8 double 0.438 0 vs ## 9 double 0.406 0 am ## 10 double 3.69 4 gear ## 11 double 2.81 2 carb ``` *Provide the number of unique values for all columns excluding columns with numeric types or date types.* ``` num_unique <- function(df) { df %>% keep(~is_character(.x) | is.factor(.x)) %>% map(~length(unique(.x))) %>% as_tibble() %>% gather() %>% rename(field_name = key, num_unique = value) } num_unique(flights) ``` ``` ## # A tibble: 4 x 2 ## field_name num_unique ## <chr> <int> ## 1 carrier 16 ## 2 tailnum 4044 ## 3 origin 3 ## 4 dest 105 ``` ``` num_unique(iris) ``` ``` ## # A tibble: 1 x 2 ## field_name num_unique ## <chr> <int> ## 1 Species 3 ``` ``` num_unique(mpg) ``` ``` ## # A tibble: 6 x 2 ## field_name num_unique ## <chr> <int> ## 1 manufacturer 15 ## 2 model 38 ## 3 trans 10 ## 4 drv 3 ## 5 fl 5 ## 6 class 7 ```
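### `safely` and `transpose` (supplementary sketch)

The notes at the top of this chapter mention `purrr::transpose` for pulling out, e.g., all the 2nd elements of a list; here is a minimal sketch (not from the original notes) of the common `safely()` and `transpose()` pattern:

```
library(purrr)

inputs <- list(1, 10, "a")
results <- map(inputs, safely(log))   # each element is list(result = ..., error = ...)

flipped <- transpose(results)         # now one list of all results, one list of all errors
ok <- map_lgl(flipped$error, is.null)
flatten_dbl(flipped$result[ok])       # the successful outputs: log(1) and log(10)
```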
Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors: *example:* ``` out <- "" for (x in letters) { out <- stringr::str_c(out, x) } out ``` * collabse letters into length\-one character vector with all characters concatenated ``` str_c(letters, collapse = "") ``` ``` ## [1] "abcdefghijklmnopqrstuvwxyz" ``` *example:* ``` x <- sample(100) sd <- 0 for (i in seq_along(x)) { sd <- sd + (x[i] - mean(x)) ^ 2 } sd <- sqrt(sd / (length(x) - 1)) sd ``` ``` ## [1] 29.01149 ``` * calculate standard deviaiton of x ``` sd(x) ``` ``` ## [1] 29.01149 ``` *example:* ``` x <- runif(100) out <- vector("numeric", length(x)) out[1] <- x[1] for (i in 2:length(x)) { out[i] <- out[i - 1] + x[i] } out ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` * calculate cumulative sum ``` cumsum(x) ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` 3. Combine your function writing and for loop skills: 1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”. 
``` num_humps <- c("five", "four", "three", "two", "one", "no") for (i in seq_along(num_humps)){ paste0("Alice the camel has ", num_humps[[i]], " humps.") %>% rep(3) %>% writeLines() writeLines("So go, Alice, go.\n") } ``` ``` ## Alice the camel has five humps. ## Alice the camel has five humps. ## Alice the camel has five humps. ## So go, Alice, go. ## ## Alice the camel has four humps. ## Alice the camel has four humps. ## Alice the camel has four humps. ## So go, Alice, go. ## ## Alice the camel has three humps. ## Alice the camel has three humps. ## Alice the camel has three humps. ## So go, Alice, go. ## ## Alice the camel has two humps. ## Alice the camel has two humps. ## Alice the camel has two humps. ## So go, Alice, go. ## ## Alice the camel has one humps. ## Alice the camel has one humps. ## Alice the camel has one humps. ## So go, Alice, go. ## ## Alice the camel has no humps. ## Alice the camel has no humps. ## Alice the camel has no humps. ## So go, Alice, go. ``` 2. Convert the nursery rhyme “ten in the bed” to a function. Generalise it to any number of people in any sleeping structure. ``` nursery_bed <- function(num, y) { output <- vector("character", num) for (i in seq_along(output)) { output[[i]] <- str_replace_all( 'There were x in the _y\n And the little one said, \n"Roll over! Roll over!"\n So they all rolled over and\n one fell out.', c("x" = (length(output) - i + 1), "_y" = y)) } str_c(output, collapse = "\n\n") %>% writeLines() } nursery_bed(3, "asteroid") ``` ``` ## There were 3 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 2 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 1 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ``` 3. Convert the song “99 bottles of beer on the wall” to a function. Generalise to any number of any vessel containing any liquid on any surface. * This is a little bit of a lazy version… ``` beer_rhyme <- function(x, y, z){ output <- vector("character", x) for (i in seq_along(output)){ output[i] <- str_replace_all("x bottles of y on the z.\n One fell off...", c( "x" = (x - i + 1), "y" = y, "z" = z )) } output <- (str_c(output, collapse = "\n") %>% str_c("\nNo more bottles...", collapse = "")) writeLines(output) } beer_rhyme(4, "soda", "toilet") ``` ``` ## 4 bottles of soda on the toilet. ## One fell off... ## 3 bottles of soda on the toilet. ## One fell off... ## 2 bottles of soda on the toilet. ## One fell off... ## 1 bottles of soda on the toilet. ## One fell off... ## No more bottles... ``` 4. It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step. How does this affect performance? Design and execute an experiment. 
``` preallocate <- function(){ x <- vector("double", 100) for (i in seq_along(x)){ x[i] <- rnorm(1) } } growing <- function(){ x <- c(0) for (i in 1:100){ x[i] <- rnorm(1) } } microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 178.0 183.6 523.665 308.05 342.65 4991.2 20 a ## no_space 213.6 222.2 531.440 344.50 429.05 4081.9 20 a ``` * see roughly 35% better performance when creating ahead of time * note: if you can do these operations with vectorized approach though – they’re often much faster ``` microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), vector = rnorm(100), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 155.8 161.45 177.075 163.50 166.20 349.6 20 b ## no_space 185.8 193.65 202.390 199.55 205.25 234.5 20 c ## vector 8.8 9.45 9.795 9.70 10.10 11.0 20 a ``` * vectorized was \> 10x faster ### 21\.2\.1 1. Write for loops to (think about the output, sequence, and body **before** you start writing the loop): 1. Compute the mean of every column in `mtcars`. ``` output <- vector("double", length(mtcars)) for (i in seq_along(mtcars)){ output[[i]] <- mean(mtcars[[i]]) } output ``` ``` ## [1] 20.090625 6.187500 230.721875 146.687500 3.596563 3.217250 ## [7] 17.848750 0.437500 0.406250 3.687500 2.812500 ``` 1. Determine the type of each column in `nycflights13::flights`. ``` output <- vector("character", length(flights)) for (i in seq_along(flights)){ output[[i]] <- typeof(flights[[i]]) } output ``` ``` ## [1] "integer" "integer" "integer" "integer" "integer" ## [6] "double" "integer" "integer" "double" "character" ## [11] "integer" "character" "character" "character" "double" ## [16] "double" "double" "double" "double" ``` 1. Compute the number of unique values in each column of `iris`. ``` output <- vector("integer", length(iris)) for (i in seq_along(iris)){ output[[i]] <- unique(iris[[i]]) %>% length() } output ``` ``` ## [1] 35 23 43 22 3 ``` 1. Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\). ``` output <- vector("list", 4) input_means <- c(-10, 0, 10, 100) for (i in seq_along(output)){ output[[i]] <- rnorm(10, mean = input_means[[i]]) } output ``` ``` ## [[1]] ## [1] -11.371326 -10.118467 -10.582961 -10.324829 -7.604983 -9.300232 ## [7] -9.840124 -9.719733 -9.784274 -10.338814 ## ## [[2]] ## [1] -1.04951842 -0.68385670 0.17893523 0.07338463 -1.18028235 ## [6] -1.00777188 0.91491408 -0.14041984 -0.25074297 -0.50055019 ## ## [[3]] ## [1] 11.013913 9.790495 10.631115 10.325991 10.608040 9.463515 11.265961 ## [8] 10.630382 10.436201 8.907654 ## ## [[4]] ## [1] 99.37012 100.31396 99.06230 98.00350 100.31506 99.67347 101.02248 ## [8] 98.32484 98.62669 100.28487 ``` 2. 
Eliminate the for loop in each of the following examples by taking advantage of an existing function that works with vectors: *example:* ``` out <- "" for (x in letters) { out <- stringr::str_c(out, x) } out ``` * collabse letters into length\-one character vector with all characters concatenated ``` str_c(letters, collapse = "") ``` ``` ## [1] "abcdefghijklmnopqrstuvwxyz" ``` *example:* ``` x <- sample(100) sd <- 0 for (i in seq_along(x)) { sd <- sd + (x[i] - mean(x)) ^ 2 } sd <- sqrt(sd / (length(x) - 1)) sd ``` ``` ## [1] 29.01149 ``` * calculate standard deviaiton of x ``` sd(x) ``` ``` ## [1] 29.01149 ``` *example:* ``` x <- runif(100) out <- vector("numeric", length(x)) out[1] <- x[1] for (i in 2:length(x)) { out[i] <- out[i - 1] + x[i] } out ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` * calculate cumulative sum ``` cumsum(x) ``` ``` ## [1] 0.1543797 0.5168570 1.4323513 1.4861995 1.7440626 2.3503876 ## [7] 2.7033856 3.4933038 3.8878801 4.8166162 4.8404351 5.0134399 ## [13] 5.8128633 5.9002886 6.4672338 7.3249551 7.4813311 7.9067374 ## [19] 7.9143362 8.6500421 9.4114592 9.8109883 10.6637337 11.5345437 ## [25] 11.8881403 12.8609933 13.0060893 13.1121490 13.2820768 13.7832678 ## [31] 14.0103818 14.8921300 15.8878166 16.3724888 17.2897726 17.6764167 ## [37] 18.3759822 18.5914902 18.7581008 19.3126850 20.0314901 20.9729033 ## [43] 21.5123325 22.1361972 22.9338153 23.9220106 23.9905409 24.1247463 ## [49] 24.3690186 24.6778073 25.1676470 25.6649358 26.0152919 26.3936317 ## [55] 26.6769802 26.7589431 27.4933689 28.3744835 28.8274173 29.5040112 ## [61] 30.4625068 31.1908181 31.5785996 32.0691594 32.4015008 33.1859971 ## [67] 34.0973779 34.4118215 34.6828655 34.9383821 35.5988994 35.9820211 ## [73] 36.7825814 37.5402040 37.9568733 38.5686788 38.6336509 39.0451422 ## [79] 39.1208101 39.8826954 40.4989736 41.2877620 41.4204198 41.8790701 ## [85] 42.8085235 43.2102977 43.4620636 43.9427926 44.7306195 45.4886119 ## [91] 46.0891834 46.4679661 47.0817039 47.6331389 48.1357901 48.3671822 ## [97] 48.8290107 49.8198761 50.6520274 50.6527903 ``` 3. Combine your function writing and for loop skills: 1. Write a for loop that `prints()` the lyrics to the children’s song “Alice the camel”. 
``` num_humps <- c("five", "four", "three", "two", "one", "no") for (i in seq_along(num_humps)){ paste0("Alice the camel has ", num_humps[[i]], " humps.") %>% rep(3) %>% writeLines() writeLines("So go, Alice, go.\n") } ``` ``` ## Alice the camel has five humps. ## Alice the camel has five humps. ## Alice the camel has five humps. ## So go, Alice, go. ## ## Alice the camel has four humps. ## Alice the camel has four humps. ## Alice the camel has four humps. ## So go, Alice, go. ## ## Alice the camel has three humps. ## Alice the camel has three humps. ## Alice the camel has three humps. ## So go, Alice, go. ## ## Alice the camel has two humps. ## Alice the camel has two humps. ## Alice the camel has two humps. ## So go, Alice, go. ## ## Alice the camel has one humps. ## Alice the camel has one humps. ## Alice the camel has one humps. ## So go, Alice, go. ## ## Alice the camel has no humps. ## Alice the camel has no humps. ## Alice the camel has no humps. ## So go, Alice, go. ``` 2. Convert the nursery rhyme “ten in the bed” to a function. Generalise it to any number of people in any sleeping structure. ``` nursery_bed <- function(num, y) { output <- vector("character", num) for (i in seq_along(output)) { output[[i]] <- str_replace_all( 'There were x in the _y\n And the little one said, \n"Roll over! Roll over!"\n So they all rolled over and\n one fell out.', c("x" = (length(output) - i + 1), "_y" = y)) } str_c(output, collapse = "\n\n") %>% writeLines() } nursery_bed(3, "asteroid") ``` ``` ## There were 3 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 2 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ## ## There were 1 in the asteroid ## And the little one said, ## "Roll over! Roll over!" ## So they all rolled over and ## one fell out. ``` 3. Convert the song “99 bottles of beer on the wall” to a function. Generalise to any number of any vessel containing any liquid on any surface. * This is a little bit of a lazy version… ``` beer_rhyme <- function(x, y, z){ output <- vector("character", x) for (i in seq_along(output)){ output[i] <- str_replace_all("x bottles of y on the z.\n One fell off...", c( "x" = (x - i + 1), "y" = y, "z" = z )) } output <- (str_c(output, collapse = "\n") %>% str_c("\nNo more bottles...", collapse = "")) writeLines(output) } beer_rhyme(4, "soda", "toilet") ``` ``` ## 4 bottles of soda on the toilet. ## One fell off... ## 3 bottles of soda on the toilet. ## One fell off... ## 2 bottles of soda on the toilet. ## One fell off... ## 1 bottles of soda on the toilet. ## One fell off... ## No more bottles... ``` 4. It’s common to see for loops that don’t preallocate the output and instead increase the length of a vector at each step. How does this affect performance? Design and execute an experiment. 
``` preallocate <- function(){ x <- vector("double", 100) for (i in seq_along(x)){ x[i] <- rnorm(1) } } growing <- function(){ x <- c(0) for (i in 1:100){ x[i] <- rnorm(1) } } microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 178.0 183.6 523.665 308.05 342.65 4991.2 20 a ## no_space 213.6 222.2 531.440 344.50 429.05 4081.9 20 a ``` * see roughly 35% better performance when creating ahead of time * note: if you can do these operations with vectorized approach though – they’re often much faster ``` microbenchmark::microbenchmark( space = preallocate(), no_space = growing(), vector = rnorm(100), times = 20 ) ``` ``` ## Unit: microseconds ## expr min lq mean median uq max neval cld ## space 155.8 161.45 177.075 163.50 166.20 349.6 20 b ## no_space 185.8 193.65 202.390 199.55 205.25 234.5 20 c ## vector 8.8 9.45 9.795 9.70 10.10 11.0 20 a ``` * vectorized was \> 10x faster 21\.3 For loop variations ------------------------- ### 21\.3\.5 1. Imagine you have a directory full of CSV files that you want to read in. You have their paths in a vector, `files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)`, and now want to read each one with `read_csv()`. Write the for loop that will load them into a single data frame. * To start this problem, I first created a file directory, and then wrote in 26 csvs each with the most popular name from each year since 1880 for a particular letter[35](#fn35). * Next I read these into a single dataframe with a for loop 2. *What happens if you use `for (nm in names(x))` and `x` has no names?* ``` x <- list(1:10, 11:18, 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` * each iteration produces an error, so nothing is written*What if only some of the elements are named?* ``` x <- list(a = 1:10, 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## NULL ## [1] 19 20 21 22 23 24 25 ``` * you have output for those with names and NULL for those without*What if the names are not unique?* ``` x <- list(a = 1:10, a = 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 19 20 21 22 23 24 25 ``` * it prints the first position with the name 3. Write a function that prints the mean of each numeric column in a data frame, along with its name. For example, `show_mean(iris)` would print: ``` show_mean(iris) #> Sepal.Length: 5.84 #> Sepal.Width: 3.06 #> Petal.Length: 3.76 #> Petal.Width: 1.20 ``` (Extra challenge: what function did I use to make sure that the numbers lined up nicely, even though the variable names had different lengths?) 
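Before the full worked solution below, here is a minimal sketch of just the alignment piece of the extra challenge. It assumes base `format()` (which left-justifies a character vector and pads every element to the width of the longest) as one way to make the columns line up; the solution that follows pads manually with `str_count()` and `str_dup()` instead.

```
# Sketch only: pad the "name:" labels to a common width with format(),
# then print each padded label next to its rounded column mean.
nums   <- purrr::keep(iris, is.numeric)
means  <- round(purrr::map_dbl(nums, mean), 2)
labels <- format(stringr::str_c(names(nums), ":"))  # pads to the longest label
writeLines(stringr::str_c(labels, " ", means))
```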
``` show_mean <- function(df){ # select just cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } df_select <- df[out] # keep/discard funs would have made this easy # make list of values w/ mean means <- vector("list", length(df_select)) names(means) <- names(df_select) for (i in seq_along(df_select)){ means[[i]] <- mean(df_select[[i]], na.rm = TRUE) %>% round(digits = 2) } # print out, use method to identify max chars for vars printed means_names <- names(means) chars_max <- (str_count(means_names) + str_count(as.character(means))) %>% max() chars_pad <- chars_max - (str_count(means_names) + str_count(as.character(means))) names(chars_pad) <- means_names str_c(means_names, ": ", str_dup(" ", chars_pad), means) %>% writeLines() } show_mean(flights) ``` ``` ## year: 2013 ## month: 6.55 ## day: 15.71 ## dep_time: 1349.11 ## sched_dep_time: 1344.25 ## dep_delay: 12.64 ## arr_time: 1502.05 ## sched_arr_time: 1536.38 ## arr_delay: 6.9 ## flight: 1971.92 ## air_time: 150.69 ## distance: 1039.91 ## hour: 13.18 ## minute: 26.23 ``` 4. What does this code do? How does it work? ``` trans <- list( disp = function(x) x * 0.0163871, am = function(x) { factor(x, labels = c("auto", "manual")) } ) for (var in names(trans)) { mtcars[[var]] <- trans[[var]](mtcars[[var]]) } mtcars ``` * first part builds list of functions, 2nd applies those to a dataset * are storing the data transformations as a function and then applying this to a dataframe[36](#fn36) ### 21\.3\.5 1. Imagine you have a directory full of CSV files that you want to read in. You have their paths in a vector, `files <- dir("data/", pattern = "\\.csv$", full.names = TRUE)`, and now want to read each one with `read_csv()`. Write the for loop that will load them into a single data frame. * To start this problem, I first created a file directory, and then wrote in 26 csvs each with the most popular name from each year since 1880 for a particular letter[35](#fn35). * Next I read these into a single dataframe with a for loop 2. *What happens if you use `for (nm in names(x))` and `x` has no names?* ``` x <- list(1:10, 11:18, 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` * each iteration produces an error, so nothing is written*What if only some of the elements are named?* ``` x <- list(a = 1:10, 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## NULL ## [1] 19 20 21 22 23 24 25 ``` * you have output for those with names and NULL for those without*What if the names are not unique?* ``` x <- list(a = 1:10, a = 11:18, c = 19:25) for (nm in names(x)) { print(x[[nm]]) } ``` ``` ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 1 2 3 4 5 6 7 8 9 10 ## [1] 19 20 21 22 23 24 25 ``` * it prints the first position with the name 3. Write a function that prints the mean of each numeric column in a data frame, along with its name. For example, `show_mean(iris)` would print: ``` show_mean(iris) #> Sepal.Length: 5.84 #> Sepal.Width: 3.06 #> Petal.Length: 3.76 #> Petal.Width: 1.20 ``` (Extra challenge: what function did I use to make sure that the numbers lined up nicely, even though the variable names had different lengths?) 
``` show_mean <- function(df){ # select just cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } df_select <- df[out] # keep/discard funs would have made this easy # make list of values w/ mean means <- vector("list", length(df_select)) names(means) <- names(df_select) for (i in seq_along(df_select)){ means[[i]] <- mean(df_select[[i]], na.rm = TRUE) %>% round(digits = 2) } # print out, use method to identify max chars for vars printed means_names <- names(means) chars_max <- (str_count(means_names) + str_count(as.character(means))) %>% max() chars_pad <- chars_max - (str_count(means_names) + str_count(as.character(means))) names(chars_pad) <- means_names str_c(means_names, ": ", str_dup(" ", chars_pad), means) %>% writeLines() } show_mean(flights) ``` ``` ## year: 2013 ## month: 6.55 ## day: 15.71 ## dep_time: 1349.11 ## sched_dep_time: 1344.25 ## dep_delay: 12.64 ## arr_time: 1502.05 ## sched_arr_time: 1536.38 ## arr_delay: 6.9 ## flight: 1971.92 ## air_time: 150.69 ## distance: 1039.91 ## hour: 13.18 ## minute: 26.23 ``` 4. What does this code do? How does it work? ``` trans <- list( disp = function(x) x * 0.0163871, am = function(x) { factor(x, labels = c("auto", "manual")) } ) for (var in names(trans)) { mtcars[[var]] <- trans[[var]](mtcars[[var]]) } mtcars ``` * first part builds list of functions, 2nd applies those to a dataset * are storing the data transformations as a function and then applying this to a dataframe[36](#fn36) 21\.4: For loops vs. functionals -------------------------------- ### 21\.4\.1 1. Read the documentation for `apply()`. In the 2d case, what two for loops does it generalise? * It allows you to input either 1 or 2 for the `MARGIN` argument, which corresponds with looping over either the rows or the columns. 2. Adapt `col_summary()` so that it only applies to numeric columns You might want to start with an `is_numeric()` function that returns a logical vector that has a TRUE corresponding to each numeric column. ``` col_summary_gen <- function(df, fun, ...) { #find cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } #make list of values w/ mean df_select <- df[out] output <- vector("list", length(df_select)) names(output) <- names(df_select) for (nm in names(output)) { output[[nm]] <- fun(df_select[[nm]], ...) %>% round(digits = 2) } as_tibble(output) } col_summary_gen(flights, fun = median, na.rm = TRUE) %>% gather() # trick to gather all easily ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 7 ## 3 day 16 ## 4 dep_time 1401 ## 5 sched_dep_time 1359 ## 6 dep_delay -2 ## 7 arr_time 1535 ## 8 sched_arr_time 1556 ## 9 arr_delay -5 ## 10 flight 1496 ## 11 air_time 129 ## 12 distance 872 ## 13 hour 13 ## 14 minute 29 ``` * the `...` makes this so you can add arguments to the functions. ### 21\.4\.1 1. Read the documentation for `apply()`. In the 2d case, what two for loops does it generalise? * It allows you to input either 1 or 2 for the `MARGIN` argument, which corresponds with looping over either the rows or the columns. 2. Adapt `col_summary()` so that it only applies to numeric columns You might want to start with an `is_numeric()` function that returns a logical vector that has a TRUE corresponding to each numeric column. ``` col_summary_gen <- function(df, fun, ...) 
{ #find cols that are numeric out <- vector("logical", length(df)) for (i in seq_along(df)) { out[[i]] <- is.numeric(df[[i]]) } #make list of values w/ mean df_select <- df[out] output <- vector("list", length(df_select)) names(output) <- names(df_select) for (nm in names(output)) { output[[nm]] <- fun(df_select[[nm]], ...) %>% round(digits = 2) } as_tibble(output) } col_summary_gen(flights, fun = median, na.rm = TRUE) %>% gather() # trick to gather all easily ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 7 ## 3 day 16 ## 4 dep_time 1401 ## 5 sched_dep_time 1359 ## 6 dep_delay -2 ## 7 arr_time 1535 ## 8 sched_arr_time 1556 ## 9 arr_delay -5 ## 10 flight 1496 ## 11 air_time 129 ## 12 distance 872 ## 13 hour 13 ## 14 minute 29 ``` * the `...` makes this so you can add arguments to the functions. 21\.5: The map functions ------------------------ ### 21\.5\.3 1. Write code that uses one of the map functions to: *Compute the mean of every column in `mtcars`.* ``` purrr::map_dbl(mtcars, mean) ``` ``` ## mpg cyl disp hp drat wt ## 20.090625 6.187500 230.721875 146.687500 3.596563 3.217250 ## qsec vs am gear carb ## 17.848750 0.437500 0.406250 3.687500 2.812500 ``` *Determine the type of each column in `nycflights13::flights`.* ``` purrr::map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` *Compute the number of unique values in each column of `iris`.* ``` purrr::map(iris, unique) %>% map_dbl(length) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 35 23 43 22 3 ``` *Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\).* ``` purrr::map(c(-10, 0, 10, 100), rnorm, n = 10) ``` ``` ## [[1]] ## [1] -11.668016 -10.174630 -9.873417 -9.935144 -9.549267 -9.989001 ## [7] -9.991157 -9.490583 -9.020713 -11.215907 ## ## [[2]] ## [1] -1.3330518 1.7970408 -0.7859694 -1.5184894 0.4544287 0.2134496 ## [7] -1.0761067 0.1600194 -0.1258518 -0.6974829 ## ## [[3]] ## [1] 10.334081 9.523160 9.730305 10.855434 10.899334 11.522520 9.532049 ## [8] 9.778320 10.276128 9.939547 ## ## [[4]] ## [1] 98.63699 100.57597 100.23664 99.65274 100.66985 99.86635 99.79877 ## [8] 98.84634 101.00019 99.09162 ``` ``` # purrr::map_dbl(flights, ~mean(is.na(.x))) ``` 2. How can you create a single vector that for each column in a data frame indicates whether or not it’s a factor? ``` purrr::map_lgl(iris, is.factor) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## FALSE FALSE FALSE FALSE TRUE ``` 3. What happens when you use the map functions on vectors that aren’t lists? What does `map(1:5, runif)` do? Why? ``` purrr::map(1:5, rnorm) ``` ``` ## [[1]] ## [1] 0.26078 ## ## [[2]] ## [1] 0.39670324 0.03106982 ## ## [[3]] ## [1] 1.0644632 -0.1632358 -1.0353975 ## ## [[4]] ## [1] -0.3556528 -0.5027896 2.0659595 -0.1360896 ## ## [[5]] ## [1] 0.50936851 0.16219258 -1.53746908 -0.04141543 -0.79950355 ``` * It runs on each item in the vector. * `map()` runs on each element item within the input, i.e .x\[\[1]], .x\[\[2]], .x\[\[n]]. 
The elements of a numeric vector are scalars (or technically length 1 numeric vectors) * In this case then it is passing the values 1, 2, 3, 4, 5 into the first argument of `rnorm` for each run, hence pattern above. 4. What does `map(-2:2, rnorm, n = 5)` do? Why? ``` map(-2:2, rnorm, n = 5) ``` ``` ## [[1]] ## [1] -1.829446 -3.357986 -3.582975 -2.039341 -2.087265 ## ## [[2]] ## [1] -0.6831658 -0.8729133 -0.3192894 -1.3425364 0.2383131 ## ## [[3]] ## [1] 0.43215278 -0.07629132 -0.14400722 1.85870258 0.13472292 ## ## [[4]] ## [1] -0.22256104 2.00645188 -0.06027834 1.44273092 0.69404413 ## ## [[5]] ## [1] 1.642268 2.233247 2.021023 1.988244 2.798515 ``` * It makes 5 vectors each of length 5 with the values centered at the means of \-2,\-1, 0, 1, 2 respectively. * The reason is that the default filling of the first argument is already named by the defined input of ‘n \= 5’, therefore, the inputs are instead going to the 2nd argument, and hence become the mean of the different rnorm calls. 5. Rewrite `map(x, function(df) lm(mpg ~ wt, data = df))` to eliminate the anonymous function. ``` mtcars %>% purrr::map( ~ lm(mpg ~ wt, data = .)) ``` ### 21\.5\.3 1. Write code that uses one of the map functions to: *Compute the mean of every column in `mtcars`.* ``` purrr::map_dbl(mtcars, mean) ``` ``` ## mpg cyl disp hp drat wt ## 20.090625 6.187500 230.721875 146.687500 3.596563 3.217250 ## qsec vs am gear carb ## 17.848750 0.437500 0.406250 3.687500 2.812500 ``` *Determine the type of each column in `nycflights13::flights`.* ``` purrr::map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` *Compute the number of unique values in each column of `iris`.* ``` purrr::map(iris, unique) %>% map_dbl(length) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## 35 23 43 22 3 ``` *Generate 10 random normals for each of \\(\\mu \= \-10\\), \\(0\\), \\(10\\), and \\(100\\).* ``` purrr::map(c(-10, 0, 10, 100), rnorm, n = 10) ``` ``` ## [[1]] ## [1] -11.668016 -10.174630 -9.873417 -9.935144 -9.549267 -9.989001 ## [7] -9.991157 -9.490583 -9.020713 -11.215907 ## ## [[2]] ## [1] -1.3330518 1.7970408 -0.7859694 -1.5184894 0.4544287 0.2134496 ## [7] -1.0761067 0.1600194 -0.1258518 -0.6974829 ## ## [[3]] ## [1] 10.334081 9.523160 9.730305 10.855434 10.899334 11.522520 9.532049 ## [8] 9.778320 10.276128 9.939547 ## ## [[4]] ## [1] 98.63699 100.57597 100.23664 99.65274 100.66985 99.86635 99.79877 ## [8] 98.84634 101.00019 99.09162 ``` ``` # purrr::map_dbl(flights, ~mean(is.na(.x))) ``` 2. How can you create a single vector that for each column in a data frame indicates whether or not it’s a factor? ``` purrr::map_lgl(iris, is.factor) ``` ``` ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## FALSE FALSE FALSE FALSE TRUE ``` 3. What happens when you use the map functions on vectors that aren’t lists? What does `map(1:5, runif)` do? Why? 
``` purrr::map(1:5, rnorm) ``` ``` ## [[1]] ## [1] 0.26078 ## ## [[2]] ## [1] 0.39670324 0.03106982 ## ## [[3]] ## [1] 1.0644632 -0.1632358 -1.0353975 ## ## [[4]] ## [1] -0.3556528 -0.5027896 2.0659595 -0.1360896 ## ## [[5]] ## [1] 0.50936851 0.16219258 -1.53746908 -0.04141543 -0.79950355 ``` * It runs on each item in the vector. * `map()` runs on each element item within the input, i.e .x\[\[1]], .x\[\[2]], .x\[\[n]]. The elements of a numeric vector are scalars (or technically length 1 numeric vectors) * In this case then it is passing the values 1, 2, 3, 4, 5 into the first argument of `rnorm` for each run, hence pattern above. 4. What does `map(-2:2, rnorm, n = 5)` do? Why? ``` map(-2:2, rnorm, n = 5) ``` ``` ## [[1]] ## [1] -1.829446 -3.357986 -3.582975 -2.039341 -2.087265 ## ## [[2]] ## [1] -0.6831658 -0.8729133 -0.3192894 -1.3425364 0.2383131 ## ## [[3]] ## [1] 0.43215278 -0.07629132 -0.14400722 1.85870258 0.13472292 ## ## [[4]] ## [1] -0.22256104 2.00645188 -0.06027834 1.44273092 0.69404413 ## ## [[5]] ## [1] 1.642268 2.233247 2.021023 1.988244 2.798515 ``` * It makes 5 vectors each of length 5 with the values centered at the means of \-2,\-1, 0, 1, 2 respectively. * The reason is that the default filling of the first argument is already named by the defined input of ‘n \= 5’, therefore, the inputs are instead going to the 2nd argument, and hence become the mean of the different rnorm calls. 5. Rewrite `map(x, function(df) lm(mpg ~ wt, data = df))` to eliminate the anonymous function. ``` mtcars %>% purrr::map( ~ lm(mpg ~ wt, data = .)) ``` 21\.9 Other patterns of for loops --------------------------------- ### 21\.9\.3 1. Implement your own version of `every()` using a for loop. Compare it with `purrr::every()`. What does purrr’s version do that your version doesn’t? ``` every_loop <- function(x, fun, ...) { output <- vector("list", length(x)) for (i in seq_along(x)) { output[[i]] <- fun(x[[i]]) } total <- flatten_lgl(output) sum(total) == length(x) } x <- list(flights, mtcars, iris) every_loop(x, is.data.frame) ``` ``` ## [1] TRUE ``` ``` every(x, is.data.frame) ``` ``` ## [1] TRUE ``` 2. Create an enhanced `col_sum()` that applies a summary function to every numeric column in a data frame. ``` col_summary_enh <- function(x,fun){ x %>% keep(is.numeric) %>% purrr::map_dbl(fun) } col_summary_enh(mtcars, median) ``` ``` ## mpg cyl disp hp drat wt qsec vs am ## 19.200 6.000 196.300 123.000 3.695 3.325 17.710 0.000 0.000 ## gear carb ## 4.000 2.000 ``` 3. A possible base R equivalent of `col_sum()` is: ``` col_sum3 <- function(df, f) { is_num <- sapply(df, is.numeric) df_num <- df[, is_num] sapply(df_num, f) } ``` But it has a number of bugs as illustrated with the following inputs: ``` df <- tibble( x = 1:3, y = 3:1, z = c("a", "b", "c") ) # OK col_sum3(df, mean) # Has problems: don't always return numeric vector col_sum3(df[1:2], mean) col_sum3(df[1], mean) col_sum3(df[0], mean) ``` What causes the bugs? * The vector output is not always consistent in it’s output type. Also, returns error when inputting an empty list due to indexing issue. ### 21\.9\.3 1. Implement your own version of `every()` using a for loop. Compare it with `purrr::every()`. What does purrr’s version do that your version doesn’t? ``` every_loop <- function(x, fun, ...) 
{ output <- vector("list", length(x)) for (i in seq_along(x)) { output[[i]] <- fun(x[[i]]) } total <- flatten_lgl(output) sum(total) == length(x) } x <- list(flights, mtcars, iris) every_loop(x, is.data.frame) ``` ``` ## [1] TRUE ``` ``` every(x, is.data.frame) ``` ``` ## [1] TRUE ``` 2. Create an enhanced `col_sum()` that applies a summary function to every numeric column in a data frame. ``` col_summary_enh <- function(x,fun){ x %>% keep(is.numeric) %>% purrr::map_dbl(fun) } col_summary_enh(mtcars, median) ``` ``` ## mpg cyl disp hp drat wt qsec vs am ## 19.200 6.000 196.300 123.000 3.695 3.325 17.710 0.000 0.000 ## gear carb ## 4.000 2.000 ``` 3. A possible base R equivalent of `col_sum()` is: ``` col_sum3 <- function(df, f) { is_num <- sapply(df, is.numeric) df_num <- df[, is_num] sapply(df_num, f) } ``` But it has a number of bugs as illustrated with the following inputs: ``` df <- tibble( x = 1:3, y = 3:1, z = c("a", "b", "c") ) # OK col_sum3(df, mean) # Has problems: don't always return numeric vector col_sum3(df[1:2], mean) col_sum3(df[1], mean) col_sum3(df[0], mean) ``` What causes the bugs? * The vector output is not always consistent in it’s output type. Also, returns error when inputting an empty list due to indexing issue. Appendix -------- ### 21\.3\.5\.1 #### Using map ``` outputted_csv <- files_example %>% mutate(csv_data = map(file_paths, read_csv)) outputted_csv <- files_example %>% mutate(csv_data = map(file_paths, safely(read_csv))) ``` #### Plot of names * Below is a plot of the proportion of individuals named the most popular letter in each year. This suggests that the top names by letter do not have as large of a proportion of the population ocmpared to historically. ``` names_appended %>% ggplot(aes(x = year, y = prop, colour = first_letter))+ geom_line() ``` #### csv other example The code below might be used to read csvs from a shared drive. I added on the ‘file\_path\_pull’ and ‘files\_example’ components to add in information on the file paths and other details that were relevant. 
You might also add this data into a new column on the output… ``` files_path_pull <- dir("//companydomain.com/directory/", pattern = "csv$", full.names = TRUE) files_example <- tibble(file_paths = files_path_pull[1:2]) %>% extract(file_paths, into = c("path", "name"), regex = "(.*)([0-9]{4}-[0-9]{2}-[0-9]{2})", remove = FALSE) read_dir <- function(dir){ #input vector of file paths name and output appended file out <- vector("list", length(dir)) for (i in seq_along(out)){ out[[i]] <- read_csv(dir[[i]]) } out <- bind_rows(out) out } read_dir(files_example$file_paths) ``` ### 21\.3\.5\.2 (with purrr) ``` purrr::map_lgl(iris, is.factor) %>% tibble::enframe() ``` ``` ## # A tibble: 5 x 2 ## name value ## <chr> <lgl> ## 1 Sepal.Length FALSE ## 2 Sepal.Width FALSE ## 3 Petal.Length FALSE ## 4 Petal.Width FALSE ## 5 Species TRUE ``` Slightly less attractive printing ``` show_mean2 <- function(df) { df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) } show_mean2(flights) ``` ``` ## year month day dep_time sched_dep_time ## 2013.000000 6.548510 15.710787 1349.109947 1344.254840 ## dep_delay arr_time sched_arr_time arr_delay flight ## 12.639070 1502.054999 1536.380220 6.895377 1971.923620 ## air_time distance hour minute ## 150.686460 1039.912604 13.180247 26.230100 ``` Maybe slightly better printing and in df ``` show_mean3 <- function(df){ df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) %>% as_tibble() %>% mutate(names = row.names(.)) } show_mean3(flights) ``` ``` ## Warning: Calling `as_tibble()` on a vector is discouraged, because the behavior is likely to change in the future. Use `enframe(name = NULL)` instead. ## This warning is displayed once per session. ``` ``` ## # A tibble: 14 x 2 ## value names ## <dbl> <chr> ## 1 2013 1 ## 2 6.55 2 ## 3 15.7 3 ## 4 1349. 4 ## 5 1344. 5 ## 6 12.6 6 ## 7 1502. 7 ## 8 1536. 8 ## 9 6.90 9 ## 10 1972. 10 ## 11 151. 11 ## 12 1040. 12 ## 13 13.2 13 ## 14 26.2 14 ``` Other method is to take advantage of the `gather()` function ``` flights %>% keep(is.numeric) %>% map(mean, na.rm = TRUE) %>% as_tibble() %>% gather() ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 6.55 ## 3 day 15.7 ## 4 dep_time 1349. ## 5 sched_dep_time 1344. ## 6 dep_delay 12.6 ## 7 arr_time 1502. ## 8 sched_arr_time 1536. ## 9 arr_delay 6.90 ## 10 flight 1972. ## 11 air_time 151. ## 12 distance 1040. ## 13 hour 13.2 ## 14 minute 26.2 ``` ### 21\.9\.3\.1 * mine can’t handle shortcut formulas or new functions ``` z <- sample(10) z %>% every( ~ . < 11) ``` ``` ## [1] TRUE ``` ``` # e.g. below would fail # z %>% # every_loop( ~ . < 11) ``` ### 21\.9 mirroring `keep` * below is one method for passing multiple, more complex arguments through keep, though you can also use function shortcuts (`~`) in `keep` and `discard` ``` ##how to pass multiple functions through keep? #can use map to subset columns by multiple criteria and then subset at end flights %>% purrr::map(is.na) %>% purrr::map_dbl(sum) %>% purrr::map_lgl(~.>10) %>% flights[.] ``` ``` ## # A tibble: 336,776 x 6 ## dep_time dep_delay arr_time arr_delay tailnum air_time ## <int> <dbl> <int> <dbl> <chr> <dbl> ## 1 517 2 830 11 N14228 227 ## 2 533 4 850 20 N24211 227 ## 3 542 2 923 33 N619AA 160 ## 4 544 -1 1004 -18 N804JB 183 ## 5 554 -6 812 -25 N668DN 116 ## 6 554 -4 740 12 N39463 150 ## 7 555 -5 913 19 N516JB 158 ## 8 557 -3 709 -14 N829AS 53 ## 9 557 -3 838 -8 N593JB 140 ## 10 558 -2 753 8 N3ALAA 138 ## # ... 
with 336,766 more rows ``` ### invoke examples Let’s change the example to be with quantile… ``` invoke(runif, n = 10) ``` ``` ## [1] 0.775555937 0.328805817 0.920314980 0.176599637 0.210958651 ## [6] 0.890200325 0.456075735 0.498955991 0.148438198 0.001021321 ``` ``` list("01a", "01b") %>% invoke(paste, ., sep = "-") ``` ``` ## [1] "01a-01b" ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(n = 5))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 1.7150650 0.4609162 -1.2650612 -0.6868529 -0.4456620 ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(5, 50))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 51.71506 50.46092 48.73494 49.31315 49.55434 ``` ``` list(m1 = mean, m2 = median) %>% invoke_map(x = rcauchy(100)) ``` ``` ## $m1 ## [1] 0.7316016 ## ## $m2 ## [1] 0.1690467 ``` ``` rcauchy(100) ``` ``` ## [1] -1.99514216 1.57378677 1.44901985 0.82604308 2.30072052 ## [6] -0.04961749 0.52626840 0.29408692 0.47790231 -1.47138470 ## [11] -2.54305059 -0.35508248 -1.65511601 -1.08467708 -15.03813728 ## [16] -1.82118206 -0.62669137 -0.79456204 -0.06347636 5.19179251 ## [21] 1.48851593 3.42095041 0.03289526 0.65171559 -0.53864091 ## [26] 0.88812626 0.93375555 0.24570517 0.97348569 -1.11905466 ## [31] -0.51964526 128.72537963 2.72138263 0.97793363 0.36391811 ## [36] 2.77745450 -4.34935786 0.81096079 5.70518746 0.81669440 ## [41] -138.41947905 2.02359725 -1.96283674 2.40809060 2.04850398 ## [46] -9.41347275 -1.06265274 0.83312509 3.55625549 1.10375978 ## [51] -2.31140048 0.65162145 -0.45665528 -1.02179975 -1.71189590 ## [56] -2.57239721 2.35617831 -10.63750166 -0.41538322 -3.80770683 ## [61] -0.55070513 1.49607830 -1.30359005 1.09910916 -3.27457763 ## [66] 16.99304208 1.09921270 -4.86030197 -0.27969649 -0.31842181 ## [71] 1.16466121 1.59209243 -0.04514112 -2.52586678 -0.19951960 ## [76] 9.47599952 3.31841045 -1.82945785 0.51884667 -4.29179059 ## [81] 0.93155898 -0.11880720 -3.03333758 -21.16294537 3.16450655 ## [86] -0.39503234 2.19801293 1.27457150 0.59413768 0.60064481 ## [91] 17.70703023 1.01880490 0.80764382 -1.63905090 0.15086898 ## [96] -1.36865319 1.99173761 3.39988162 -0.63043489 -0.26058630 ``` Let’s store everything in a dataframe… ``` set.seed(123) tibble(funs = list(rn = "rnorm", rp = "rpois", ru = "runif"), params = list(list(n = 20, mean = 10), list(n = 20, lambda = 3), list(n = 20, min = -1, max = 1))) %>% with(invoke_map_df(funs, params)) ``` ``` ## # A tibble: 20 x 3 ## rn rp ru ## <dbl> <int> <dbl> ## 1 9.44 1 0.330 ## 2 9.77 2 -0.810 ## 3 11.6 2 -0.232 ## 4 10.1 2 -0.451 ## 5 10.1 1 0.629 ## 6 11.7 1 -0.103 ## 7 10.5 2 0.620 ## 8 8.73 3 0.625 ## 9 9.31 2 0.589 ## 10 9.55 5 -0.120 ## 11 11.2 0 0.509 ## 12 10.4 3 0.258 ## 13 10.4 4 0.420 ## 14 10.1 1 -0.999 ## 15 9.44 3 -0.0494 ## 16 11.8 2 -0.560 ## 17 10.5 1 -0.240 ## 18 8.03 4 0.226 ## 19 10.7 5 -0.296 ## 20 9.53 2 -0.778 ``` ``` map_df(iris, ~.x*2) ``` ``` ## Warning in Ops.factor(.x, 2): '*' not meaningful for factors ``` ``` ## # A tibble: 150 x 5 ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## <dbl> <dbl> <dbl> <dbl> <lgl> ## 1 10.2 7 2.8 0.4 NA ## 2 9.8 6 2.8 0.4 NA ## 3 9.4 6.4 2.6 0.4 NA ## 4 9.2 6.2 3 0.4 NA ## 5 10 7.2 2.8 0.4 NA ## 6 10.8 7.8 3.4 0.8 NA ## 7 9.2 6.8 2.8 0.6 NA ## 8 10 6.8 3 0.4 NA ## 9 8.8 5.8 2.8 0.4 NA ## 10 9.8 6.2 3 0.2 NA ## # 
... with 140 more rows ``` ``` select(iris, -Species) %>% flatten_dbl() %>% mean() ``` ``` ## [1] 3.4645 ``` ``` mean.and.median <- function(x){ list(mean = mean(x, na.rm = TRUE), median = median(x, na.rm = TRUE)) } ``` Difference between dfr and dfc, taken from here: [https://bio304\-class.github.io/bio304\-fall2017/control\-flow\-in\-R.html](https://bio304-class.github.io/bio304-fall2017/control-flow-in-R.html) ``` iris %>% select(-Species) %>% map_dfr(mean.and.median) %>% bind_cols(tibble(names = names(select(iris, -Species)))) ``` ``` ## # A tibble: 4 x 3 ## mean median names ## <dbl> <dbl> <chr> ## 1 5.84 5.8 Sepal.Length ## 2 3.06 3 Sepal.Width ## 3 3.76 4.35 Petal.Length ## 4 1.20 1.3 Petal.Width ``` ``` iris %>% select(-Species) %>% map_dfr(mean.and.median) %>% bind_cols(tibble(names = names(select(iris, -Species)))) ``` ``` ## # A tibble: 4 x 3 ## mean median names ## <dbl> <dbl> <chr> ## 1 5.84 5.8 Sepal.Length ## 2 3.06 3 Sepal.Width ## 3 3.76 4.35 Petal.Length ## 4 1.20 1.3 Petal.Width ``` ``` iris %>% select(-Species) %>% map_dfc(mean.and.median) ``` ``` ## # A tibble: 1 x 8 ## mean median mean1 median1 mean2 median2 mean3 median3 ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 5.84 5.8 3.06 3 3.76 4.35 1.20 1.3 ``` ### indexing nms caution When creating your empty list, use indexes rather than names if you are creating values, otherwise you are creating new values on the list. E.g. in the example below I the output ends up being length 6 because you have the 3 `NULL` values plus the 3 newly created named positions. ``` x <- list(a = 1:10, b = 11:18, c = 19:25) output <- vector("list", length(x)) for (nm in names(x)) { output[[nm]] <- x[[nm]] * 3 } output ``` ``` ## [[1]] ## NULL ## ## [[2]] ## NULL ## ## [[3]] ## NULL ## ## $a ## [1] 3 6 9 12 15 18 21 24 27 30 ## ## $b ## [1] 33 36 39 42 45 48 51 54 ## ## $c ## [1] 57 60 63 66 69 72 75 ``` ### in\-class notes the `map_*` functions are essentially like running a `flatten_*` after running `map`. E.g. 
the two things below are equivalent ``` map(flights, typeof) %>% flatten_chr() ``` ``` ## [1] "integer" "integer" "integer" "integer" "integer" ## [6] "double" "integer" "integer" "double" "character" ## [11] "integer" "character" "character" "character" "double" ## [16] "double" "double" "double" "double" ``` ``` map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` Calculate the number of unique values for each level ``` iris %>% map(unique) %>% map_dbl(length) map_int(iris, ~length(unique(.x))) ``` Iterate through different min and max values ``` min_params <- c(-1, 0, -10) max_params <- c(11:13) map2(.x = min_params, .y = max_params, ~runif(n = 10, min = .x, max = .y)) ``` ``` ## [[1]] ## [1] 1.9234337 7.0166670 4.0117614 8.4583500 0.2343757 4.2187129 ## [7] 10.8194838 9.7166134 9.6376287 1.1006318 ## ## [[2]] ## [1] 1.568348 7.837223 4.122198 7.881098 3.844479 2.252293 9.387532 ## [8] 1.123140 5.601348 6.138066 ## ## [[3]] ## [1] 3.7997461 -2.3450586 1.2380998 11.9528980 1.1067551 10.4780551 ## [7] 11.0320783 4.0009046 -0.5541351 -6.6168221 ``` When using `pmap` it’s often best to keep the parameters in a dataframe ``` min_df_params <- tibble(n = c(10, 15, 20, 50 ), min = c(-1, 0, 1, 2), max = c(0, 1, 2, 3)) pmap(min_df_params, runif) ``` ``` ## [[1]] ## [1] -0.06470020 -0.69877110 -0.93927943 -0.05227306 -0.27940373 ## [6] -0.85770570 -0.45071534 -0.04590876 -0.41451665 -0.59548972 ## ## [[2]] ## [1] 0.6478935 0.3198206 0.3077200 0.2197676 0.3694889 0.9842192 0.1542023 ## [8] 0.0910440 0.1419069 0.6900071 0.6192565 0.8913941 0.6729991 0.7370777 ## [15] 0.5211357 ## ## [[3]] ## [1] 1.659838 1.821805 1.786282 1.979822 1.439432 1.311702 1.409475 ## [8] 1.010467 1.183850 1.842729 1.231162 1.239100 1.076691 1.245724 ## [15] 1.732135 1.847453 1.497527 1.387909 1.246449 1.111096 ## ## [[4]] ## [1] 2.389994 2.571935 2.216893 2.444768 2.217991 2.502300 2.353905 ## [8] 2.649985 2.374714 2.355445 2.533688 2.740334 2.221103 2.412746 ## [15] 2.265687 2.629973 2.183828 2.863644 2.746568 2.668285 2.618018 ## [22] 2.372238 2.529836 2.874682 2.581750 2.839768 2.312448 2.708290 ## [29] 2.265018 2.594343 2.481290 2.265033 2.564590 2.913188 2.901874 ## [36] 2.274167 2.321483 2.985641 2.619993 2.937314 2.466533 2.406833 ## [43] 2.659230 2.152347 2.572867 2.238726 2.962359 2.601366 2.515030 ## [50] 2.402573 ``` You can often use `map` a bunch of output that can then be stored in a tibble ``` tibble(type = map_chr(mtcars, typeof), means = map_dbl(mtcars, mean), median = map_dbl(mtcars, median), names = names(mtcars)) ``` ``` ## # A tibble: 11 x 4 ## type means median names ## <chr> <dbl> <dbl> <chr> ## 1 double 20.1 19.2 mpg ## 2 double 6.19 6 cyl ## 3 double 231. 196. disp ## 4 double 147. 
123 hp ## 5 double 3.60 3.70 drat ## 6 double 3.22 3.32 wt ## 7 double 17.8 17.7 qsec ## 8 double 0.438 0 vs ## 9 double 0.406 0 am ## 10 double 3.69 4 gear ## 11 double 2.81 2 carb ``` *Provide the number of unique values for all columns excluding columns with numeric types or date types.* ``` num_unique <- function(df) { df %>% keep(~is_character(.x) | is.factor(.x)) %>% map(~length(unique(.x))) %>% as_tibble() %>% gather() %>% rename(field_name = key, num_unique = value) } num_unique(flights) ``` ``` ## # A tibble: 4 x 2 ## field_name num_unique ## <chr> <int> ## 1 carrier 16 ## 2 tailnum 4044 ## 3 origin 3 ## 4 dest 105 ``` ``` num_unique(iris) ``` ``` ## # A tibble: 1 x 2 ## field_name num_unique ## <chr> <int> ## 1 Species 3 ``` ``` num_unique(mpg) ``` ``` ## # A tibble: 6 x 2 ## field_name num_unique ## <chr> <int> ## 1 manufacturer 15 ## 2 model 38 ## 3 trans 10 ## 4 drv 3 ## 5 fl 5 ## 6 class 7 ```
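A more compact variant of the same `num_unique()` summary, as a sketch: it swaps the `as_tibble()` + `gather()` + `rename()` chain for `tibble::enframe()` (the replacement the earlier `as_tibble()` warning suggests) and assumes `dplyr::n_distinct()` for the counting.

```
# Sketch: count distinct values per non-numeric column and return a tibble.
num_unique2 <- function(df) {
  df %>%
    purrr::keep(~ is.character(.x) | is.factor(.x)) %>%
    purrr::map_int(dplyr::n_distinct) %>%
    tibble::enframe(name = "field_name", value = "num_unique")
}

num_unique2(iris)  # should match num_unique(iris) above: Species, 3
```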
You might also add this data into a new column on the output… ``` files_path_pull <- dir("//companydomain.com/directory/", pattern = "csv$", full.names = TRUE) files_example <- tibble(file_paths = files_path_pull[1:2]) %>% extract(file_paths, into = c("path", "name"), regex = "(.*)([0-9]{4}-[0-9]{2}-[0-9]{2})", remove = FALSE) read_dir <- function(dir){ #input vector of file paths name and output appended file out <- vector("list", length(dir)) for (i in seq_along(out)){ out[[i]] <- read_csv(dir[[i]]) } out <- bind_rows(out) out } read_dir(files_example$file_paths) ``` ### 21\.3\.5\.2 (with purrr) ``` purrr::map_lgl(iris, is.factor) %>% tibble::enframe() ``` ``` ## # A tibble: 5 x 2 ## name value ## <chr> <lgl> ## 1 Sepal.Length FALSE ## 2 Sepal.Width FALSE ## 3 Petal.Length FALSE ## 4 Petal.Width FALSE ## 5 Species TRUE ``` Slightly less attractive printing ``` show_mean2 <- function(df) { df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) } show_mean2(flights) ``` ``` ## year month day dep_time sched_dep_time ## 2013.000000 6.548510 15.710787 1349.109947 1344.254840 ## dep_delay arr_time sched_arr_time arr_delay flight ## 12.639070 1502.054999 1536.380220 6.895377 1971.923620 ## air_time distance hour minute ## 150.686460 1039.912604 13.180247 26.230100 ``` Maybe slightly better printing and in df ``` show_mean3 <- function(df){ df %>% keep(is.numeric) %>% map_dbl(mean, na.rm = TRUE) %>% as_tibble() %>% mutate(names = row.names(.)) } show_mean3(flights) ``` ``` ## Warning: Calling `as_tibble()` on a vector is discouraged, because the behavior is likely to change in the future. Use `enframe(name = NULL)` instead. ## This warning is displayed once per session. ``` ``` ## # A tibble: 14 x 2 ## value names ## <dbl> <chr> ## 1 2013 1 ## 2 6.55 2 ## 3 15.7 3 ## 4 1349. 4 ## 5 1344. 5 ## 6 12.6 6 ## 7 1502. 7 ## 8 1536. 8 ## 9 6.90 9 ## 10 1972. 10 ## 11 151. 11 ## 12 1040. 12 ## 13 13.2 13 ## 14 26.2 14 ``` Other method is to take advantage of the `gather()` function ``` flights %>% keep(is.numeric) %>% map(mean, na.rm = TRUE) %>% as_tibble() %>% gather() ``` ``` ## # A tibble: 14 x 2 ## key value ## <chr> <dbl> ## 1 year 2013 ## 2 month 6.55 ## 3 day 15.7 ## 4 dep_time 1349. ## 5 sched_dep_time 1344. ## 6 dep_delay 12.6 ## 7 arr_time 1502. ## 8 sched_arr_time 1536. ## 9 arr_delay 6.90 ## 10 flight 1972. ## 11 air_time 151. ## 12 distance 1040. ## 13 hour 13.2 ## 14 minute 26.2 ``` ### 21\.9\.3\.1 * mine can’t handle shortcut formulas or new functions ``` z <- sample(10) z %>% every( ~ . < 11) ``` ``` ## [1] TRUE ``` ``` # e.g. below would fail # z %>% # every_loop( ~ . < 11) ``` ### 21\.9 mirroring `keep` * below is one method for passing multiple, more complex arguments through keep, though you can also use function shortcuts (`~`) in `keep` and `discard` ``` ##how to pass multiple functions through keep? #can use map to subset columns by multiple criteria and then subset at end flights %>% purrr::map(is.na) %>% purrr::map_dbl(sum) %>% purrr::map_lgl(~.>10) %>% flights[.] ``` ``` ## # A tibble: 336,776 x 6 ## dep_time dep_delay arr_time arr_delay tailnum air_time ## <int> <dbl> <int> <dbl> <chr> <dbl> ## 1 517 2 830 11 N14228 227 ## 2 533 4 850 20 N24211 227 ## 3 542 2 923 33 N619AA 160 ## 4 544 -1 1004 -18 N804JB 183 ## 5 554 -6 812 -25 N668DN 116 ## 6 554 -4 740 12 N39463 150 ## 7 555 -5 913 19 N516JB 158 ## 8 557 -3 709 -14 N829AS 53 ## 9 557 -3 838 -8 N593JB 140 ## 10 558 -2 753 8 N3ALAA 138 ## # ... 
with 336,766 more rows ``` ### invoke examples Let’s change the example to be with quantile… ``` invoke(runif, n = 10) ``` ``` ## [1] 0.775555937 0.328805817 0.920314980 0.176599637 0.210958651 ## [6] 0.890200325 0.456075735 0.498955991 0.148438198 0.001021321 ``` ``` list("01a", "01b") %>% invoke(paste, ., sep = "-") ``` ``` ## [1] "01a-01b" ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(n = 5))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 1.7150650 0.4609162 -1.2650612 -0.6868529 -0.4456620 ``` ``` set.seed(123) invoke_map(list(runif, rnorm), list(list(n = 10), list(5, 50))) ``` ``` ## [[1]] ## [1] 0.2875775 0.7883051 0.4089769 0.8830174 0.9404673 0.0455565 0.5281055 ## [8] 0.8924190 0.5514350 0.4566147 ## ## [[2]] ## [1] 51.71506 50.46092 48.73494 49.31315 49.55434 ``` ``` list(m1 = mean, m2 = median) %>% invoke_map(x = rcauchy(100)) ``` ``` ## $m1 ## [1] 0.7316016 ## ## $m2 ## [1] 0.1690467 ``` ``` rcauchy(100) ``` ``` ## [1] -1.99514216 1.57378677 1.44901985 0.82604308 2.30072052 ## [6] -0.04961749 0.52626840 0.29408692 0.47790231 -1.47138470 ## [11] -2.54305059 -0.35508248 -1.65511601 -1.08467708 -15.03813728 ## [16] -1.82118206 -0.62669137 -0.79456204 -0.06347636 5.19179251 ## [21] 1.48851593 3.42095041 0.03289526 0.65171559 -0.53864091 ## [26] 0.88812626 0.93375555 0.24570517 0.97348569 -1.11905466 ## [31] -0.51964526 128.72537963 2.72138263 0.97793363 0.36391811 ## [36] 2.77745450 -4.34935786 0.81096079 5.70518746 0.81669440 ## [41] -138.41947905 2.02359725 -1.96283674 2.40809060 2.04850398 ## [46] -9.41347275 -1.06265274 0.83312509 3.55625549 1.10375978 ## [51] -2.31140048 0.65162145 -0.45665528 -1.02179975 -1.71189590 ## [56] -2.57239721 2.35617831 -10.63750166 -0.41538322 -3.80770683 ## [61] -0.55070513 1.49607830 -1.30359005 1.09910916 -3.27457763 ## [66] 16.99304208 1.09921270 -4.86030197 -0.27969649 -0.31842181 ## [71] 1.16466121 1.59209243 -0.04514112 -2.52586678 -0.19951960 ## [76] 9.47599952 3.31841045 -1.82945785 0.51884667 -4.29179059 ## [81] 0.93155898 -0.11880720 -3.03333758 -21.16294537 3.16450655 ## [86] -0.39503234 2.19801293 1.27457150 0.59413768 0.60064481 ## [91] 17.70703023 1.01880490 0.80764382 -1.63905090 0.15086898 ## [96] -1.36865319 1.99173761 3.39988162 -0.63043489 -0.26058630 ``` Let’s store everything in a dataframe… ``` set.seed(123) tibble(funs = list(rn = "rnorm", rp = "rpois", ru = "runif"), params = list(list(n = 20, mean = 10), list(n = 20, lambda = 3), list(n = 20, min = -1, max = 1))) %>% with(invoke_map_df(funs, params)) ``` ``` ## # A tibble: 20 x 3 ## rn rp ru ## <dbl> <int> <dbl> ## 1 9.44 1 0.330 ## 2 9.77 2 -0.810 ## 3 11.6 2 -0.232 ## 4 10.1 2 -0.451 ## 5 10.1 1 0.629 ## 6 11.7 1 -0.103 ## 7 10.5 2 0.620 ## 8 8.73 3 0.625 ## 9 9.31 2 0.589 ## 10 9.55 5 -0.120 ## 11 11.2 0 0.509 ## 12 10.4 3 0.258 ## 13 10.4 4 0.420 ## 14 10.1 1 -0.999 ## 15 9.44 3 -0.0494 ## 16 11.8 2 -0.560 ## 17 10.5 1 -0.240 ## 18 8.03 4 0.226 ## 19 10.7 5 -0.296 ## 20 9.53 2 -0.778 ``` ``` map_df(iris, ~.x*2) ``` ``` ## Warning in Ops.factor(.x, 2): '*' not meaningful for factors ``` ``` ## # A tibble: 150 x 5 ## Sepal.Length Sepal.Width Petal.Length Petal.Width Species ## <dbl> <dbl> <dbl> <dbl> <lgl> ## 1 10.2 7 2.8 0.4 NA ## 2 9.8 6 2.8 0.4 NA ## 3 9.4 6.4 2.6 0.4 NA ## 4 9.2 6.2 3 0.4 NA ## 5 10 7.2 2.8 0.4 NA ## 6 10.8 7.8 3.4 0.8 NA ## 7 9.2 6.8 2.8 0.6 NA ## 8 10 6.8 3 0.4 NA ## 9 8.8 5.8 2.8 0.4 NA ## 10 9.8 6.2 3 0.2 NA ## # 
... with 140 more rows ``` ``` select(iris, -Species) %>% flatten_dbl() %>% mean() ``` ``` ## [1] 3.4645 ``` ``` mean.and.median <- function(x){ list(mean = mean(x, na.rm = TRUE), median = median(x, na.rm = TRUE)) } ``` The difference between `map_dfr` and `map_dfc`, taken from here: [https://bio304\-class.github.io/bio304\-fall2017/control\-flow\-in\-R.html](https://bio304-class.github.io/bio304-fall2017/control-flow-in-R.html) ``` iris %>% select(-Species) %>% map_dfr(mean.and.median) %>% bind_cols(tibble(names = names(select(iris, -Species)))) ``` ``` ## # A tibble: 4 x 3 ## mean median names ## <dbl> <dbl> <chr> ## 1 5.84 5.8 Sepal.Length ## 2 3.06 3 Sepal.Width ## 3 3.76 4.35 Petal.Length ## 4 1.20 1.3 Petal.Width ``` ``` iris %>% select(-Species) %>% map_dfc(mean.and.median) ``` ``` ## # A tibble: 1 x 8 ## mean median mean1 median1 mean2 median2 mean3 median3 ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 5.84 5.8 3.06 3 3.76 4.35 1.20 1.3 ``` ### indexing nms caution When creating your empty list, loop over indexes rather than names if the names do not already exist in the output; otherwise you create new named elements at the end of the list. E.g. in the example below the output ends up being length 6 because you have the 3 `NULL` values plus the 3 newly created named positions. (Looping over `seq_along(x)`, or setting `names(output) <- names(x)` up front, avoids this.) ``` x <- list(a = 1:10, b = 11:18, c = 19:25) output <- vector("list", length(x)) for (nm in names(x)) { output[[nm]] <- x[[nm]] * 3 } output ``` ``` ## [[1]] ## NULL ## ## [[2]] ## NULL ## ## [[3]] ## NULL ## ## $a ## [1] 3 6 9 12 15 18 21 24 27 30 ## ## $b ## [1] 33 36 39 42 45 48 51 54 ## ## $c ## [1] 57 60 63 66 69 72 75 ``` ### in\-class notes the `map_*` functions are essentially like running a `flatten_*` after running `map`. E.g. 
the two things below are equivalent ``` map(flights, typeof) %>% flatten_chr() ``` ``` ## [1] "integer" "integer" "integer" "integer" "integer" ## [6] "double" "integer" "integer" "double" "character" ## [11] "integer" "character" "character" "character" "double" ## [16] "double" "double" "double" "double" ``` ``` map_chr(flights, typeof) ``` ``` ## year month day dep_time sched_dep_time ## "integer" "integer" "integer" "integer" "integer" ## dep_delay arr_time sched_arr_time arr_delay carrier ## "double" "integer" "integer" "double" "character" ## flight tailnum origin dest air_time ## "integer" "character" "character" "character" "double" ## distance hour minute time_hour ## "double" "double" "double" "double" ``` Calculate the number of unique values for each level ``` iris %>% map(unique) %>% map_dbl(length) map_int(iris, ~length(unique(.x))) ``` Iterate through different min and max values ``` min_params <- c(-1, 0, -10) max_params <- c(11:13) map2(.x = min_params, .y = max_params, ~runif(n = 10, min = .x, max = .y)) ``` ``` ## [[1]] ## [1] 1.9234337 7.0166670 4.0117614 8.4583500 0.2343757 4.2187129 ## [7] 10.8194838 9.7166134 9.6376287 1.1006318 ## ## [[2]] ## [1] 1.568348 7.837223 4.122198 7.881098 3.844479 2.252293 9.387532 ## [8] 1.123140 5.601348 6.138066 ## ## [[3]] ## [1] 3.7997461 -2.3450586 1.2380998 11.9528980 1.1067551 10.4780551 ## [7] 11.0320783 4.0009046 -0.5541351 -6.6168221 ``` When using `pmap` it’s often best to keep the parameters in a dataframe ``` min_df_params <- tibble(n = c(10, 15, 20, 50 ), min = c(-1, 0, 1, 2), max = c(0, 1, 2, 3)) pmap(min_df_params, runif) ``` ``` ## [[1]] ## [1] -0.06470020 -0.69877110 -0.93927943 -0.05227306 -0.27940373 ## [6] -0.85770570 -0.45071534 -0.04590876 -0.41451665 -0.59548972 ## ## [[2]] ## [1] 0.6478935 0.3198206 0.3077200 0.2197676 0.3694889 0.9842192 0.1542023 ## [8] 0.0910440 0.1419069 0.6900071 0.6192565 0.8913941 0.6729991 0.7370777 ## [15] 0.5211357 ## ## [[3]] ## [1] 1.659838 1.821805 1.786282 1.979822 1.439432 1.311702 1.409475 ## [8] 1.010467 1.183850 1.842729 1.231162 1.239100 1.076691 1.245724 ## [15] 1.732135 1.847453 1.497527 1.387909 1.246449 1.111096 ## ## [[4]] ## [1] 2.389994 2.571935 2.216893 2.444768 2.217991 2.502300 2.353905 ## [8] 2.649985 2.374714 2.355445 2.533688 2.740334 2.221103 2.412746 ## [15] 2.265687 2.629973 2.183828 2.863644 2.746568 2.668285 2.618018 ## [22] 2.372238 2.529836 2.874682 2.581750 2.839768 2.312448 2.708290 ## [29] 2.265018 2.594343 2.481290 2.265033 2.564590 2.913188 2.901874 ## [36] 2.274167 2.321483 2.985641 2.619993 2.937314 2.466533 2.406833 ## [43] 2.659230 2.152347 2.572867 2.238726 2.962359 2.601366 2.515030 ## [50] 2.402573 ``` You can often use `map` a bunch of output that can then be stored in a tibble ``` tibble(type = map_chr(mtcars, typeof), means = map_dbl(mtcars, mean), median = map_dbl(mtcars, median), names = names(mtcars)) ``` ``` ## # A tibble: 11 x 4 ## type means median names ## <chr> <dbl> <dbl> <chr> ## 1 double 20.1 19.2 mpg ## 2 double 6.19 6 cyl ## 3 double 231. 196. disp ## 4 double 147. 
123 hp ## 5 double 3.60 3.70 drat ## 6 double 3.22 3.32 wt ## 7 double 17.8 17.7 qsec ## 8 double 0.438 0 vs ## 9 double 0.406 0 am ## 10 double 3.69 4 gear ## 11 double 2.81 2 carb ``` *Provide the number of unique values for all columns excluding columns with numeric types or date types.* ``` num_unique <- function(df) { df %>% keep(~is_character(.x) | is.factor(.x)) %>% map(~length(unique(.x))) %>% as_tibble() %>% gather() %>% rename(field_name = key, num_unique = value) } num_unique(flights) ``` ``` ## # A tibble: 4 x 2 ## field_name num_unique ## <chr> <int> ## 1 carrier 16 ## 2 tailnum 4044 ## 3 origin 3 ## 4 dest 105 ``` ``` num_unique(iris) ``` ``` ## # A tibble: 1 x 2 ## field_name num_unique ## <chr> <int> ## 1 Species 3 ``` ``` num_unique(mpg) ``` ``` ## # A tibble: 6 x 2 ## field_name num_unique ## <chr> <int> ## 1 manufacturer 15 ## 2 model 38 ## 3 trans 10 ## 4 drv 3 ## 5 fl 5 ## 6 class 7 ```
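An alternative sketch of the same idea (not from the original notes; `num_unique2` is just a made\-up name, and `n_distinct()` plus `enframe()` are assumed to be available via the tidyverse): count the distinct values of the non\-numeric columns directly, then reshape the named vector into a two\-column tibble.

```
library(tidyverse)

# Hypothetical alternative to num_unique(): keep only character/factor columns,
# count distinct values per column, then turn the named vector into a tibble
num_unique2 <- function(df) {
  df %>%
    keep(~ is.character(.x) | is.factor(.x)) %>%
    map_int(n_distinct) %>%
    enframe(name = "field_name", value = "num_unique")
}

num_unique2(mpg)
```

This should give the same result as `num_unique()` above, just with the counting and reshaping collapsed into one pass.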
Ch. 23: Model basics ==================== ***WARNING* many of the solutions and notes in this chapter use various `modelr::sim*` datasets but do not explicitly refer to these coming from the `modelr` package.** **Key questions:** * 23\.2\.1\. \#2 * 23\.3\.3\. \#1, 4 * 23\.4\.5\. \#4 **Functions and notes:** * `geom_abline` creates a line or lines given intercepts and slopes, e.g. `geom_abline(aes(intercept = a1, slope = a2), data = models, alpha = 1/4)` * `optim` is a general purpose function for optimization using Newton\-Raphson search * `coef` is a function for extracting coefficients from a linear model * `modelr::data_grid` can be used to generate an evenly spaced grid of values that covers the region where the data lies[37](#fn37) + First arg is a dataframe, next args are variable names to build the grid from + when using it with continuous data, `seq_range` or similar can be a good complement, see[38](#fn38) for an example where this is used. * `seq_range(x1, 5)` takes 5 evenly spaced values spanning the range of `x1`. Three other useful args: + `pretty = TRUE` will generate a “pretty” sequence, e.g. `seq_range(c(0.0123, 0.923423), n = 5, pretty = TRUE)` gives 0, 0\.2, 0\.4, 0\.6, 0\.8, 1 + `trim = 0.1` will trim off 10% of tail values (useful if vars have long tailed distributions and you want to focus on values near the center) + `expand = 0.1` is kind of the opposite and expands the range by 10% * `modelr::add_predictions` takes a data frame and a model and adds predictions from the model in a new column * `modelr::add_residuals` is similar to the above but requires the actual value to be in the dataframe so that residuals can be calculated * `modelr::spread_residuals` and `modelr::gather_residuals` allow you to do this for multiple models at once. Equivalents are available for `*_predictions` as well. * use `model_matrix` to see what equation is being fitted * To include *all* 2\-way interactions you can do something like `model_matrix(sim3, y ~ (x1 + x2 + x3)^2)` or `model_matrix(sim3, y ~ (.)^2)` * Use `I()`, as in `I(x2^2)`, to incorporate a squared term or any transformation that includes `+`, `-`, `*`, or `^` * Use `poly` to include 1, 2, … n order terms associated with a variable, e.g. `model_matrix(df, y ~ poly(x, 3))` * `splines::ns()` represents a safer alternative to `poly` that is less likely to become extreme at the boundaries; see the interesting spline note[39](#fn39). * `nobs(mod)` shows how many observations were used in model building (assuming ‘mod’ represents a model) * make dropping of missing values explicit with `options(na.action = na.warn)` + to make it silent in specific models use e.g. `mod <- lm(y ~ x, data = df, na.action = na.exclude)` 23\.2: A simple model --------------------- ### 23\.2\.1 1. One downside of the linear model is that it is sensitive to unusual values because the distance incorporates a squared term. Fit a linear model to the simulated data below, and visualise the results. Rerun a few times to generate different simulated datasets. What do you notice about the model? 
``` sim1a <- tibble( x = rep(1:10, each = 3), y = x * 1.5 + 6 + rt(length(x), df = 2) ) ``` generate n number of datasets that fit characteristics of `sim1a` ``` sim1a_mult <- tibble(num = 1:500) %>% rowwise() %>% mutate(data = list(tibble( x = rep(1:10, each = 3), y = x * 1.5 + 6 + rt(length(x), df = 2) ))) %>% #undoes rowwise (used to have much more of workflow with rowwise, but have #changed to use more of map) ungroup() plots_prep <- sim1a_mult %>% mutate(mods = map(data, ~lm(y ~ x, data = .x))) %>% mutate(preds = map2(data, mods, modelr::add_predictions), rmse = map2_dbl(mods, data, modelr::rmse), mae = map2_dbl(mods, data, modelr::mae)) plots_prep %>% ggplot(aes(x = "rmse", y = rmse))+ ggbeeswarm::geom_beeswarm() ``` ``` # # Other option for visualizing # plots_prep %>% # ggplot(aes(x = "rmse", y = rmse))+ # geom_violin() ``` * as a metric it tends to be more suseptible to outliers, than say mae 2. One way to make linear models more robust is to use a different distance measure. For example, instead of root\-mean\-squared distance, you could use mean\-absolute distance: ``` measure_distance <- function(mod, data) { diff <- data$y - make_prediction(mod, data) mean(abs(diff)) } ``` Use `optim()` to fit this model to the simulated data above and compare it to the linear model. ``` model_1df <- function(betas, x1 = sim1$x) { betas[1] + x1 * betas[2] } measure_mae <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) mean(abs(diff)) } measure_rmse <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) sqrt(mean(diff^2)) } best_mae_sims <- map(sim1a_mult$data, ~optim(c(0,0), measure_mae, data = .x)) best_rmse_sims <- map(sim1a_mult$data, ~optim(c(0,0), measure_rmse, data = .x)) ``` ``` mae_df <- best_mae_sims %>% map("value") %>% transpose() %>% set_names(c("error")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "mae", row_n = row_number()) rmse_df <- best_rmse_sims %>% map("value") %>% transpose() %>% set_names(c("error")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "rmse", row_n = row_number()) ``` ``` bind_rows(rmse_df, mae_df) %>% ggplot(aes(x = error_type, colour = error_type))+ ggbeeswarm::geom_beeswarm(aes(y = error))+ facet_wrap(~error_type, scales = "free_x") ``` * you can see the error for rmse seems to have more extreme examples 3. One challenge with performing numerical optimisation is that it’s only guaranteed to find one local optima. What’s the problem with optimising a three parameter model like this? ``` model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } ``` * the problem is that is that there are multiple “best” solutions in this example. a\[1] and a\[3] together represent the intercept here. 
``` models_two <- vector("list", 2) model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } models_two[[1]] <- optim(c(0, 0, 0), measure_rmse, data = sim1) models_two[[1]]$par ``` ``` ## [1] 4.219814 2.051678 -3.049197 ``` ``` model1 <- function(a, data) { a[1] + data$x * a[2] } models_two[[2]] <- optim(c(0, 0), measure_rmse, data = sim1) models_two ``` ``` ## [[1]] ## [[1]]$par ## [1] 4.219814 2.051678 -3.049197 ## ## [[1]]$value ## [1] 2.128181 ## ## [[1]]$counts ## function gradient ## 110 NA ## ## [[1]]$convergence ## [1] 0 ## ## [[1]]$message ## NULL ## ## ## [[2]] ## [[2]]$par ## [1] 4.222248 2.051204 ## ## [[2]]$value ## [1] 2.128181 ## ## [[2]]$counts ## function gradient ## 77 NA ## ## [[2]]$convergence ## [1] 0 ## ## [[2]]$message ## NULL ``` * a1 and a3 are essentially equivalent, so optimizes somewhat arbitrarily, in this case can see the a1\+a3 in the 1st (when there are 3 parameters) is equal to a1 in the 2nd (when there are only two parameters)…[40](#fn40) 23\.3: Visualising models ------------------------- ### 23\.3\.3 1. Instead of using `lm()` to fit a straight line, you can use `loess()` to fit a smooth curve. Repeat the process of model fitting, grid generation, predictions, and visualisation on `sim1` using `loess()` instead of `lm()`. How does the result compare to `geom_smooth()`? ``` sim1_mod <- lm(y ~ x, data = sim1) sim1_mod_loess <- loess(y ~ x, data = sim1) #Look at plot of points sim1 %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_predictions(sim1_mod_loess) %>% add_residuals(sim1_mod_loess) %>% ggplot()+ geom_point(aes(x = x, y = y))+ geom_smooth(aes(x = x, y = y), se = FALSE)+ geom_line(aes(x = x, y = pred_lin, colour = "pred_lin"), alpha = 0.3, size = 2.5)+ geom_line(aes(x = x, y = pred, colour = "pred_loess"), alpha = 0.3, size = 2.5) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` * For sim1, the default value for `geom_smooth` is to use loess, so it is the exact same. `geom_smooth` will sometimes use gam or other methods depending on data, note that there is also a `weight` argument that can be useful * this relationship looks pretty solidly linear*Plot of the resids:* ``` sim1 %>% add_predictions(sim1_mod_loess, var = "pred_loess") %>% add_residuals(sim1_mod_loess, var = "resid_loess") %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_residuals(sim1_mod, var = "resid_lin") %>% # mutate(row_n = row_number) %>% ggplot()+ geom_ref_line(h = 0)+ geom_point(aes(x = x, y = resid_lin, colour = "pred_lin"))+ geom_point(aes(x = x, y = resid_loess, colour = "pred_loess")) ``` *Distribution of residuals:* ``` sim1 %>% gather_residuals(sim1_mod, sim1_mod_loess) %>% mutate(model = ifelse(model == "sim1_mod", "sim1_mod_lin", model)) %>% ggplot()+ geom_histogram(aes(x = resid, fill = model))+ facet_wrap(~model) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` 2. `add_predictions()` is paired with `gather_predictions()` and `spread_predictions()`. How do these three functions differ? 
* `spread_predictions()` adds a new `pred` for each model included * `gather_predictions()` adds 2 columns `model` and `pred` for each model and repeats the input rows for each model (seems like it would work well with facets) ``` sim1 %>% spread_predictions(sim1_mod, sim1_mod_loess) ``` ``` ## # A tibble: 30 x 4 ## x y sim1_mod sim1_mod_loess ## <int> <dbl> <dbl> <dbl> ## 1 1 4.20 6.27 5.34 ## 2 1 7.51 6.27 5.34 ## 3 1 2.13 6.27 5.34 ## 4 2 8.99 8.32 8.27 ## 5 2 10.2 8.32 8.27 ## 6 2 11.3 8.32 8.27 ## 7 3 7.36 10.4 10.8 ## 8 3 10.5 10.4 10.8 ## 9 3 10.5 10.4 10.8 ## 10 4 12.4 12.4 12.8 ## # ... with 20 more rows ``` 3. What does `geom_ref_line()` do? What package does it come from? Why is displaying a reference line in plots showing residuals useful and important? * It comes from modelr (not ggplot2) and draws either a horizontal or vertical reference line, depending on whether you specify h or v: `modelr::geom_ref_line`. A reference line at 0 makes it easy to see whether the residuals are centred on zero and whether any pattern remains around that line. 4. Why might you want to look at a frequency polygon of absolute residuals? What are the pros and cons compared to looking at the raw residuals? * may be good for situations when you have TONS of residuals and the raw plot is hard to look at… * pros are it may be easier to get a sense of the counts; cons are that you can’t plot it against something like x, so patterns associated with residuals will not be picked\-up, e.g. heteroskedasticity, or, more simply, signs that the model could be improved by incorporating other vars in the model 23\.4: Formulas and model families ---------------------------------- ### 23\.4\.5 1. What happens if you repeat the analysis of `sim2` using a model without an intercept. What happens to the model equation? What happens to the predictions? ``` mod2 <- lm(y ~ x, data = sim2) mod2_NoInt <- lm(y ~ x - 1, data = sim2) mod2 ``` ``` ## ## Call: ## lm(formula = y ~ x, data = sim2) ## ## Coefficients: ## (Intercept) xb xc xd ## 1.1522 6.9639 4.9750 0.7588 ``` ``` mod2_NoInt ``` ``` ## ## Call: ## lm(formula = y ~ x - 1, data = sim2) ## ## Coefficients: ## xa xb xc xd ## 1.152 8.116 6.127 1.911 ``` * you get an ANOVA\-style parameterisation: one of the levels takes on the value of the old intercept and the other levels have the old intercept added to their coefficients, so the model equation is just reparameterised and the predictions are unchanged. 2. Use `model_matrix()` to explore the equations generated for the models I fit to `sim3` and `sim4`. Why is `*` a good shorthand for interaction? ``` model_matrix(y ~ x1 * x2, data = sim3) ``` ``` ## # A tibble: 120 x 8 ## `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d` ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 1 0 0 0 0 0 0 ## 2 1 1 0 0 0 0 0 0 ## 3 1 1 0 0 0 0 0 0 ## 4 1 1 1 0 0 1 0 0 ## 5 1 1 1 0 0 1 0 0 ## 6 1 1 1 0 0 1 0 0 ## 7 1 1 0 1 0 0 1 0 ## 8 1 1 0 1 0 0 1 0 ## 9 1 1 0 1 0 0 1 0 ## 10 1 1 0 0 1 0 0 1 ## # ... with 110 more rows ``` ``` model_matrix(y ~ x1 * x2, data = sim4) ``` ``` ## # A tibble: 300 x 4 ## `(Intercept)` x1 x2 `x1:x2` ## <dbl> <dbl> <dbl> <dbl> ## 1 1 -1 -1 1 ## 2 1 -1 -1 1 ## 3 1 -1 -1 1 ## 4 1 -1 -0.778 0.778 ## 5 1 -1 -0.778 0.778 ## 6 1 -1 -0.778 0.778 ## 7 1 -1 -0.556 0.556 ## 8 1 -1 -0.556 0.556 ## 9 1 -1 -0.556 0.556 ## 10 1 -1 -0.333 0.333 ## # ... with 290 more rows ``` * because each of the levels gets multiplied by the other variable (you just don’t have to write out the design variables) 3. Using the basic principles, convert the formulas in the following two models into functions. (Hint: start by converting the categorical variable into 0\-1 variables.) ``` mod1 <- lm(y ~ x1 + x2, data = sim3) mod2 <- lm(y ~ x1 * x2, data = sim3) ``` * not completed (the key step would be converting `x2` into the 0\-1 indicator columns `x2b`, `x2c`, `x2d` shown in the model matrices above) 4. 
For `sim4`, which of `mod1` and `mod2` is better? I think `mod2` does a slightly better job at removing patterns, but it’s pretty subtle. Can you come up with a plot to support my claim? ``` ```r mod1 <- lm(y ~ x1 + x2, data = sim4) mod2 <- lm(y ~ x1 * x2, data = sim4) grid <- modelr::seq_range(sim4$x1, n = 3, pretty = TRUE) sim4 %>% gather_residuals(mod1, mod2) %>% mutate(resid_abs = (resid)^2) %>% group_by(model) %>% summarise(rmse = sqrt(mean(resid_abs))) ``` ``` ## # A tibble: 2 x 2 ## model rmse ## <chr> <dbl> ## 1 mod1 2.10 ## 2 mod2 2.06 ``` * The aggregate `rmse` between the two models is nearly the same. ```r sim4 %>% gather_residuals(mod1, mod2) %>% ggplot(aes(x = resid, fill = model, group = model))+ geom_density(alpha = 0.3) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-19-1.png" width="672" /> * any difference in resids is pretty subtle... *Let's plot them though and see how their predictions differ* ```r sim4 %>% spread_residuals(mod1, mod2) %>% gather_predictions(mod1, mod2) %>% ggplot(aes(x1, pred, colour = x2, group = x2))+ geom_line()+ geom_point(aes(y = y), alpha = 0.3)+ facet_wrap(~model) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-20-1.png" width="672" /> * notice subtle difference where for mod2 as x2 decreases, the predicted value for x1 also decreases, this is because the interaciton term between these is positive * the values near where x2 and x1 are most near 0 should be where the residuals are most similar *Plot difference in residuals* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-21-1.png" width="672" /> * This shows how `mod2` has higher valued predictions when x1 and x2 are opposite signs compared to the predictions from `mod1` *Plot difference in distance from 0 between `mod1` and `mod1` resids* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate_at(c("mod1", "mod2"), abs) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-22-1.png" width="672" /> * see slightly more red than blue indicating that `mod2` may, in general, have slightly smaller residuals on a wider range of locations * however little and `mod1` has advantage of simplicity ``` Appendix -------- ### Other model families * **Generalised linear models**, e.g. `stats::glm()`. Linear models assume that the response is continuous and the error has a normal distribution. Generalised linear models extend linear models to include non\-continuous responses (e.g. binary data or counts). They work by defining a distance metric based on the statistical idea of likelihood. * **Generalised additive models**, e.g. `mgcv::gam()`, extend generalised linear models to incorporate arbitrary smooth functions. That means you can write a formula like `y ~ s(x)` which becomes an equation like `y = f(x)` and let `gam()` estimate what that function is (subject to some smoothness constraints to make the problem tractable). * **Penalised linear models**, e.g. 
`glmnet::glmnet()`, add a penalty term to the distance that penalises complex models (as defined by the distance between the parameter vector and the origin). This tends to make models that generalise better to new datasets from the same population. * **Robust linear models**, e.g. `MASS:rlm()`, tweak the distance to downweight points that are very far away. This makes them less sensitive to the presence of outliers, at the cost of being not quite as good when there are no outliers. * **Trees**, e.g. `rpart::rpart()`, attack the problem in a completely different way than linear models. They fit a piece\-wise constant model, splitting the data into progressively smaller and smaller pieces. Trees aren’t terribly effective by themselves, but they are very powerful when used in aggregate by models like **random forests** (e.g. `randomForest::randomForest()`) or **gradient boosting machines** (e.g. `xgboost::xgboost`.) ### 23\.2 book example ``` models <- tibble( a1 = runif(250, -20, 40), a2 = runif(250, -5, 5) ) ggplot(sim1, aes(x,y))+ geom_abline(aes(intercept = a1, slope = a2), data = models, alpha = 0.25)+ geom_point() ``` Next, lets set\-up a way to calculate the distance between predicted value and each point. ``` models_error <- models %>% mutate(preds = map2(.y = a1, .x = a2, ~mutate(sim1, pred = .x*x + .y, resid = y - pred, error_squared = (y - pred)^2, error_abs = abs(y - pred))), rmse = map_dbl(preds, ~(with(.x, mean(error_squared)) %>% sqrt(.))), mae = map_dbl(preds, ~with(.x, mean(error_abs))), rank_rmse = min_rank(rmse)) ``` ``` ggplot(sim1, aes(x, y))+ geom_abline(aes(intercept = a1, slope = a2, colour = -rmse), data = filter(models_error, rank_rmse <= 10))+ geom_point() ``` Could instead plot this as a model of a1 vs a2 and whichever does the best ``` models_error %>% ggplot(aes(x = a1, y = a2))+ geom_point(colour = "red", size = 4, data = filter(models_error, rank_rmse < 15))+ geom_point(aes(colour = -rmse)) ``` *Could be more methodical and use Grid Search.* Let’s use the min and max points of the top 15 to set. ``` #need helper function because distance function expects the model as a numeric vector of length 2 sim1_rmse <- function(b0, b1, df = sim1, x = "x", y = "y"){ ((b0 + b1*df[[x]]) - df[[y]])^2 %>% mean() %>% sqrt() } sim1_rmse(2,3) ``` ``` ## [1] 4.574414 ``` ``` grid_space <- models_error %>% filter(rank_rmse < 15) %>% summarise(min_x = min(a1), max_x = max(a1), min_y = min(a2), max_y = max(a2)) grid_models <- data_grid(grid_space, a1 = seq(min_x, max_x, length = 25), a2 = seq(min_y, max_y, length = 25) ) %>% mutate(rmse = map2_dbl(a1, a2, sim1_rmse, df = sim1)) grid_models %>% ggplot(aes(x = a1, y = a2))+ geom_point(colour = "red", size = 4, data = filter(grid_models, min_rank(rmse) < 15))+ geom_point(aes(colour = -rmse)) ``` In the future could add\-in a grid\-search that would have used PCA to first rotate axes and then do min and max values. 
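As a small follow\-up (not in the original notes; `best_grid` is just a made\-up name), the single best `(a1, a2)` pair could be pulled straight out of `grid_models` before moving on:

```
# Extract the grid point with the lowest rmse from the grid search results
best_grid <- grid_models %>%
  arrange(rmse) %>%
  slice(1)

best_grid
# For a sanity check, compare against coef(lm(y ~ x, data = sim1))
```
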
Could instead use Newton\-Raphson search with `optim` ``` model_1df <- function(betas, x1 = sim1$x) { betas[1] + x1 * betas[2] } measure_rmse <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) sqrt(mean(diff^2)) } best_rmse <- optim(c(0,0), measure_rmse, data = sim1) best_rmse$par ``` ``` ## [1] 4.222248 2.051204 ``` ``` best_rmse$value ``` ``` ## [1] 2.128181 ``` Above produces roughly equal to R’s `lm()` function ``` sim1_mod <- lm(y ~ x, data = sim1) sim1_mod %>% coef ``` ``` ## (Intercept) x ## 4.220822 2.051533 ``` ``` rmse(sim1_mod, sim1) ``` ``` ## [1] 2.128181 ``` * Notice are *slightly* different, perhaps due to number of steps `optim()` will take E.g. could build a function for optimizing upon MAE instead and still works ``` measure_mae <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) mean(abs(diff)) } best_mae <- optim(c(0,0), measure_mae, data = sim1) best_mae$par ``` ``` ## [1] 4.364852 2.048918 ``` ### 23\.2\.1\.1 ``` *Let's look at differences in the coefficients produced:* ```r mae_df <- best_mae_sims %>% map("par") %>% transpose() %>% set_names(c("bo", "b1")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "mae", row_n = row_number()) rmse_df <- best_rmse_sims %>% map("par") %>% transpose() %>% set_names(c("bo", "b1")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "rmse", row_n = row_number()) bind_rows(rmse_df, mae_df) %>% ggplot(aes(x = bo, colour = error_type))+ geom_point(aes(y = b1)) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-29-1.png" width="672" /> * see more variability in the b1 * another way of visualizing the variability in coefficients is below ```r left_join(rmse_df, mae_df, by = "row_n", suffix = c("_rmse", "_mae")) %>% ggplot(aes(x = b1_rmse, y = b1_mae))+ geom_point()+ coord_fixed() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-30-1.png" width="672" /> ``` ### 23\.3\.3\.2 ``` #How can I add a prefix when using spread_predictions() ? -- could use the #method below sim1 %>% gather_predictions(sim1_mod, sim1_mod_loess) %>% mutate(model = str_c(model, "_pred")) %>% spread(key = model, value = pred) ``` ``` ## # A tibble: 30 x 4 ## x y sim1_mod_loess_pred sim1_mod_pred ## <int> <dbl> <dbl> <dbl> ## 1 1 2.13 5.34 6.27 ## 2 1 4.20 5.34 6.27 ## 3 1 7.51 5.34 6.27 ## 4 2 8.99 8.27 8.32 ## 5 2 10.2 8.27 8.32 ## 6 2 11.3 8.27 8.32 ## 7 3 7.36 10.8 10.4 ## 8 3 10.5 10.8 10.4 ## 9 3 10.5 10.8 10.4 ## 10 4 11.9 12.8 12.4 ## # ... with 20 more rows ``` ``` #now could add a spread_residuals() without it breaking... sim1 %>% gather_predictions(sim1_mod, sim1_mod_loess) %>% ggplot()+ geom_point(aes(x = x, y = y))+ geom_line(aes(x = x, y = pred))+ facet_wrap(~model) ``` ### tidy grid\_space Below is a pseudo\-tidy way of creating the `grid_space` var from above, it actually took more effort to create this probably, so didn’t use. However you could imagine if you had to search across A LOT of values it could be worth doing something similar to this (note `caret` or `tidymodels` are resources for effectively building search spaces for hyper paramter tuning and other related modeling activities). ``` funs_names <- tibble(funs = c(rep("min", 2), rep("max", 2)), coord = rep(c("x", "y"), 2), field_names = str_c(funs, "_", coord)) grid_space <- models_error %>% filter(rank_rmse < 15) %>% select(a1, a2) %>% rep(2) %>% invoke_map(.f = funs_names$funs, .x = .) 
%>% set_names(funs_names$field_names) %>% as_tibble() grid_space ``` ``` ## # A tibble: 1 x 4 ## min_x min_y max_x max_y ## <dbl> <dbl> <dbl> <dbl> ## 1 -7.10 0.319 14.9 3.89 ``` ### 23\.4\.5\.4 Rather than `geom_density` or `geom_freqpoly` let’s look at histogram with values overlaid rather than stacked. ``` sim4 %>% gather_residuals(mod1, mod2) %>% ggplot(aes(x = resid, y = ..density.., fill = model))+ geom_histogram(position = "identity", alpha = 0.3) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` 
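One last note (an addition, assuming a more recent ggplot2, >= 3.3.0): the `..density..` notation above has since been superseded by `after_stat()`, so an equivalent sketch would be:

```
sim4 %>% 
  gather_residuals(mod1, mod2) %>% 
  ggplot(aes(x = resid, y = after_stat(density), fill = model))+
  geom_histogram(position = "identity", alpha = 0.3)
```
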
``` model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } ``` * the problem is that is that there are multiple “best” solutions in this example. a\[1] and a\[3] together represent the intercept here. ``` models_two <- vector("list", 2) model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } models_two[[1]] <- optim(c(0, 0, 0), measure_rmse, data = sim1) models_two[[1]]$par ``` ``` ## [1] 4.219814 2.051678 -3.049197 ``` ``` model1 <- function(a, data) { a[1] + data$x * a[2] } models_two[[2]] <- optim(c(0, 0), measure_rmse, data = sim1) models_two ``` ``` ## [[1]] ## [[1]]$par ## [1] 4.219814 2.051678 -3.049197 ## ## [[1]]$value ## [1] 2.128181 ## ## [[1]]$counts ## function gradient ## 110 NA ## ## [[1]]$convergence ## [1] 0 ## ## [[1]]$message ## NULL ## ## ## [[2]] ## [[2]]$par ## [1] 4.222248 2.051204 ## ## [[2]]$value ## [1] 2.128181 ## ## [[2]]$counts ## function gradient ## 77 NA ## ## [[2]]$convergence ## [1] 0 ## ## [[2]]$message ## NULL ``` * a1 and a3 are essentially equivalent, so optimizes somewhat arbitrarily, in this case can see the a1\+a3 in the 1st (when there are 3 parameters) is equal to a1 in the 2nd (when there are only two parameters)…[40](#fn40) ### 23\.2\.1 1. One downside of the linear model is that it is sensitive to unusual values because the distance incorporates a squared term. Fit a linear model to the simulated data below, and visualise the results. Rerun a few times to generate different simulated datasets. What do you notice about the model? ``` sim1a <- tibble( x = rep(1:10, each = 3), y = x * 1.5 + 6 + rt(length(x), df = 2) ) ``` generate n number of datasets that fit characteristics of `sim1a` ``` sim1a_mult <- tibble(num = 1:500) %>% rowwise() %>% mutate(data = list(tibble( x = rep(1:10, each = 3), y = x * 1.5 + 6 + rt(length(x), df = 2) ))) %>% #undoes rowwise (used to have much more of workflow with rowwise, but have #changed to use more of map) ungroup() plots_prep <- sim1a_mult %>% mutate(mods = map(data, ~lm(y ~ x, data = .x))) %>% mutate(preds = map2(data, mods, modelr::add_predictions), rmse = map2_dbl(mods, data, modelr::rmse), mae = map2_dbl(mods, data, modelr::mae)) plots_prep %>% ggplot(aes(x = "rmse", y = rmse))+ ggbeeswarm::geom_beeswarm() ``` ``` # # Other option for visualizing # plots_prep %>% # ggplot(aes(x = "rmse", y = rmse))+ # geom_violin() ``` * as a metric it tends to be more suseptible to outliers, than say mae 2. One way to make linear models more robust is to use a different distance measure. For example, instead of root\-mean\-squared distance, you could use mean\-absolute distance: ``` measure_distance <- function(mod, data) { diff <- data$y - make_prediction(mod, data) mean(abs(diff)) } ``` Use `optim()` to fit this model to the simulated data above and compare it to the linear model. 
``` model_1df <- function(betas, x1 = sim1$x) { betas[1] + x1 * betas[2] } measure_mae <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) mean(abs(diff)) } measure_rmse <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) sqrt(mean(diff^2)) } best_mae_sims <- map(sim1a_mult$data, ~optim(c(0,0), measure_mae, data = .x)) best_rmse_sims <- map(sim1a_mult$data, ~optim(c(0,0), measure_rmse, data = .x)) ``` ``` mae_df <- best_mae_sims %>% map("value") %>% transpose() %>% set_names(c("error")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "mae", row_n = row_number()) rmse_df <- best_rmse_sims %>% map("value") %>% transpose() %>% set_names(c("error")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "rmse", row_n = row_number()) ``` ``` bind_rows(rmse_df, mae_df) %>% ggplot(aes(x = error_type, colour = error_type))+ ggbeeswarm::geom_beeswarm(aes(y = error))+ facet_wrap(~error_type, scales = "free_x") ``` * you can see the error for rmse seems to have more extreme examples 3. One challenge with performing numerical optimisation is that it’s only guaranteed to find one local optima. What’s the problem with optimising a three parameter model like this? ``` model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } ``` * the problem is that is that there are multiple “best” solutions in this example. a\[1] and a\[3] together represent the intercept here. ``` models_two <- vector("list", 2) model1 <- function(a, data) { a[1] + data$x * a[2] + a[3] } models_two[[1]] <- optim(c(0, 0, 0), measure_rmse, data = sim1) models_two[[1]]$par ``` ``` ## [1] 4.219814 2.051678 -3.049197 ``` ``` model1 <- function(a, data) { a[1] + data$x * a[2] } models_two[[2]] <- optim(c(0, 0), measure_rmse, data = sim1) models_two ``` ``` ## [[1]] ## [[1]]$par ## [1] 4.219814 2.051678 -3.049197 ## ## [[1]]$value ## [1] 2.128181 ## ## [[1]]$counts ## function gradient ## 110 NA ## ## [[1]]$convergence ## [1] 0 ## ## [[1]]$message ## NULL ## ## ## [[2]] ## [[2]]$par ## [1] 4.222248 2.051204 ## ## [[2]]$value ## [1] 2.128181 ## ## [[2]]$counts ## function gradient ## 77 NA ## ## [[2]]$convergence ## [1] 0 ## ## [[2]]$message ## NULL ``` * a1 and a3 are essentially equivalent, so optimizes somewhat arbitrarily, in this case can see the a1\+a3 in the 1st (when there are 3 parameters) is equal to a1 in the 2nd (when there are only two parameters)…[40](#fn40) 23\.3: Visualising models ------------------------- ### 23\.3\.3 1. Instead of using `lm()` to fit a straight line, you can use `loess()` to fit a smooth curve. Repeat the process of model fitting, grid generation, predictions, and visualisation on `sim1` using `loess()` instead of `lm()`. How does the result compare to `geom_smooth()`? ``` sim1_mod <- lm(y ~ x, data = sim1) sim1_mod_loess <- loess(y ~ x, data = sim1) #Look at plot of points sim1 %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_predictions(sim1_mod_loess) %>% add_residuals(sim1_mod_loess) %>% ggplot()+ geom_point(aes(x = x, y = y))+ geom_smooth(aes(x = x, y = y), se = FALSE)+ geom_line(aes(x = x, y = pred_lin, colour = "pred_lin"), alpha = 0.3, size = 2.5)+ geom_line(aes(x = x, y = pred, colour = "pred_loess"), alpha = 0.3, size = 2.5) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` * For sim1, the default value for `geom_smooth` is to use loess, so it is the exact same. 
`geom_smooth` will sometimes use gam or other methods depending on data, note that there is also a `weight` argument that can be useful * this relationship looks pretty solidly linear*Plot of the resids:* ``` sim1 %>% add_predictions(sim1_mod_loess, var = "pred_loess") %>% add_residuals(sim1_mod_loess, var = "resid_loess") %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_residuals(sim1_mod, var = "resid_lin") %>% # mutate(row_n = row_number) %>% ggplot()+ geom_ref_line(h = 0)+ geom_point(aes(x = x, y = resid_lin, colour = "pred_lin"))+ geom_point(aes(x = x, y = resid_loess, colour = "pred_loess")) ``` *Distribution of residuals:* ``` sim1 %>% gather_residuals(sim1_mod, sim1_mod_loess) %>% mutate(model = ifelse(model == "sim1_mod", "sim1_mod_lin", model)) %>% ggplot()+ geom_histogram(aes(x = resid, fill = model))+ facet_wrap(~model) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` 2. `add_predictions()` is paired with `gather_predictions()` and `spread_predictions()`. How do these three functions differ? * `spread_predictions()` adds a new `pred` for each model included * `gather_predictions()` adds 2 columns `model` and `pred` for each model and repeats the input rows for each model (seems like it would work well with facets) ``` sim1 %>% spread_predictions(sim1_mod, sim1_mod_loess) ``` ``` ## # A tibble: 30 x 4 ## x y sim1_mod sim1_mod_loess ## <int> <dbl> <dbl> <dbl> ## 1 1 4.20 6.27 5.34 ## 2 1 7.51 6.27 5.34 ## 3 1 2.13 6.27 5.34 ## 4 2 8.99 8.32 8.27 ## 5 2 10.2 8.32 8.27 ## 6 2 11.3 8.32 8.27 ## 7 3 7.36 10.4 10.8 ## 8 3 10.5 10.4 10.8 ## 9 3 10.5 10.4 10.8 ## 10 4 12.4 12.4 12.8 ## # ... with 20 more rows ``` 3. What does `geom_ref_line()` do? What package does it come from? Why is displaying a reference line in plots showing residuals useful and important? * It comes from ggplot2 and shows either a geom\_hline or a geom\_vline, depending on whether you specify h or v. `ggplot2::geom_ref_line` 4. Why might you want to look at a frequency polygon of absolute residuals? What are the pros and cons compared to looking at the raw residuals? * may be good for situations when you have TONS of residuals, and is hard to look at?… * pros are it may be easier to get sense of count, cons are that you can’t plot it against something like x so patterns associated with residuals will not be picked\-up, e.g. heteroskedasticity, or more simply, signs that the model could be improved by incorporating other vars in the model ### 23\.3\.3 1. Instead of using `lm()` to fit a straight line, you can use `loess()` to fit a smooth curve. Repeat the process of model fitting, grid generation, predictions, and visualisation on `sim1` using `loess()` instead of `lm()`. How does the result compare to `geom_smooth()`? ``` sim1_mod <- lm(y ~ x, data = sim1) sim1_mod_loess <- loess(y ~ x, data = sim1) #Look at plot of points sim1 %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_predictions(sim1_mod_loess) %>% add_residuals(sim1_mod_loess) %>% ggplot()+ geom_point(aes(x = x, y = y))+ geom_smooth(aes(x = x, y = y), se = FALSE)+ geom_line(aes(x = x, y = pred_lin, colour = "pred_lin"), alpha = 0.3, size = 2.5)+ geom_line(aes(x = x, y = pred, colour = "pred_loess"), alpha = 0.3, size = 2.5) ``` ``` ## `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` * For sim1, the default value for `geom_smooth` is to use loess, so it is the exact same. 
`geom_smooth` will sometimes use gam or other methods depending on data, note that there is also a `weight` argument that can be useful * this relationship looks pretty solidly linear*Plot of the resids:* ``` sim1 %>% add_predictions(sim1_mod_loess, var = "pred_loess") %>% add_residuals(sim1_mod_loess, var = "resid_loess") %>% add_predictions(sim1_mod, var = "pred_lin") %>% add_residuals(sim1_mod, var = "resid_lin") %>% # mutate(row_n = row_number) %>% ggplot()+ geom_ref_line(h = 0)+ geom_point(aes(x = x, y = resid_lin, colour = "pred_lin"))+ geom_point(aes(x = x, y = resid_loess, colour = "pred_loess")) ``` *Distribution of residuals:* ``` sim1 %>% gather_residuals(sim1_mod, sim1_mod_loess) %>% mutate(model = ifelse(model == "sim1_mod", "sim1_mod_lin", model)) %>% ggplot()+ geom_histogram(aes(x = resid, fill = model))+ facet_wrap(~model) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ``` 2. `add_predictions()` is paired with `gather_predictions()` and `spread_predictions()`. How do these three functions differ? * `spread_predictions()` adds a new `pred` for each model included * `gather_predictions()` adds 2 columns `model` and `pred` for each model and repeats the input rows for each model (seems like it would work well with facets) ``` sim1 %>% spread_predictions(sim1_mod, sim1_mod_loess) ``` ``` ## # A tibble: 30 x 4 ## x y sim1_mod sim1_mod_loess ## <int> <dbl> <dbl> <dbl> ## 1 1 4.20 6.27 5.34 ## 2 1 7.51 6.27 5.34 ## 3 1 2.13 6.27 5.34 ## 4 2 8.99 8.32 8.27 ## 5 2 10.2 8.32 8.27 ## 6 2 11.3 8.32 8.27 ## 7 3 7.36 10.4 10.8 ## 8 3 10.5 10.4 10.8 ## 9 3 10.5 10.4 10.8 ## 10 4 12.4 12.4 12.8 ## # ... with 20 more rows ``` 3. What does `geom_ref_line()` do? What package does it come from? Why is displaying a reference line in plots showing residuals useful and important? * It comes from ggplot2 and shows either a geom\_hline or a geom\_vline, depending on whether you specify h or v. `ggplot2::geom_ref_line` 4. Why might you want to look at a frequency polygon of absolute residuals? What are the pros and cons compared to looking at the raw residuals? * may be good for situations when you have TONS of residuals, and is hard to look at?… * pros are it may be easier to get sense of count, cons are that you can’t plot it against something like x so patterns associated with residuals will not be picked\-up, e.g. heteroskedasticity, or more simply, signs that the model could be improved by incorporating other vars in the model 23\.4: Formulas and model families ---------------------------------- ### 23\.4\.5 1. What happens if you repeat the analysis of `sim2` using a model without an intercept. What happens to the model equation? What happens to the predictions? ``` mod2 <- lm(y ~ x, data = sim2) mod2_NoInt <- lm(y ~ x - 1, data = sim2) mod2 ``` ``` ## ## Call: ## lm(formula = y ~ x, data = sim2) ## ## Coefficients: ## (Intercept) xb xc xd ## 1.1522 6.9639 4.9750 0.7588 ``` ``` mod2_NoInt ``` ``` ## ## Call: ## lm(formula = y ~ x - 1, data = sim2) ## ## Coefficients: ## xa xb xc xd ## 1.152 8.116 6.127 1.911 ``` * you have an ANOVA analysis, one of the variables takes on the value of the intercept, the others all have the value of the intercept added to them. 2. Use `model_matrix()` to explore the equations generated for the models I fit to `sim3` and `sim4`. Why is `*` a good shorthand for interaction? 
``` model_matrix(y ~ x1 * x2, data = sim3) ``` ``` ## # A tibble: 120 x 8 ## `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d` ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 1 0 0 0 0 0 0 ## 2 1 1 0 0 0 0 0 0 ## 3 1 1 0 0 0 0 0 0 ## 4 1 1 1 0 0 1 0 0 ## 5 1 1 1 0 0 1 0 0 ## 6 1 1 1 0 0 1 0 0 ## 7 1 1 0 1 0 0 1 0 ## 8 1 1 0 1 0 0 1 0 ## 9 1 1 0 1 0 0 1 0 ## 10 1 1 0 0 1 0 0 1 ## # ... with 110 more rows ``` ``` model_matrix(y ~ x1 * x2, data = sim4) ``` ``` ## # A tibble: 300 x 4 ## `(Intercept)` x1 x2 `x1:x2` ## <dbl> <dbl> <dbl> <dbl> ## 1 1 -1 -1 1 ## 2 1 -1 -1 1 ## 3 1 -1 -1 1 ## 4 1 -1 -0.778 0.778 ## 5 1 -1 -0.778 0.778 ## 6 1 -1 -0.778 0.778 ## 7 1 -1 -0.556 0.556 ## 8 1 -1 -0.556 0.556 ## 9 1 -1 -0.556 0.556 ## 10 1 -1 -0.333 0.333 ## # ... with 290 more rows ``` * because each of the levels are multiplied by one another (just don’t have to write in the design variables) 3. Using the basic principles, convert the formulas in the following two models into functions. (Hint: start by converting the categorical variable into 0\-1 variables.) ``` mod1 <- lm(y ~ x1 + x2, data = sim3) mod2 <- lm(y ~ x1 * x2, data = sim3) ``` * not completed 4. For `sim4`, which of `mod1` and `mod2` is better? I think `mod2` does a slightly better job at removing patterns, but it’s pretty subtle. Can you come up with a plot to support my claim? ``` ```r mod1 <- lm(y ~ x1 + x2, data = sim4) mod2 <- lm(y ~ x1 * x2, data = sim4) grid <- modelr::seq_range(sim4$x1, n = 3, pretty = TRUE) sim4 %>% gather_residuals(mod1, mod2) %>% mutate(resid_abs = (resid)^2) %>% group_by(model) %>% summarise(rmse = sqrt(mean(resid_abs))) ``` ``` ## # A tibble: 2 x 2 ## model rmse ## <chr> <dbl> ## 1 mod1 2.10 ## 2 mod2 2.06 ``` * The aggregate `rmse` between the two models is nearly the same. ```r sim4 %>% gather_residuals(mod1, mod2) %>% ggplot(aes(x = resid, fill = model, group = model))+ geom_density(alpha = 0.3) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-19-1.png" width="672" /> * any difference in resids is pretty subtle... 
*Let's plot them though and see how their predictions differ* ```r sim4 %>% spread_residuals(mod1, mod2) %>% gather_predictions(mod1, mod2) %>% ggplot(aes(x1, pred, colour = x2, group = x2))+ geom_line()+ geom_point(aes(y = y), alpha = 0.3)+ facet_wrap(~model) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-20-1.png" width="672" /> * notice subtle difference where for mod2 as x2 decreases, the predicted value for x1 also decreases, this is because the interaciton term between these is positive * the values near where x2 and x1 are most near 0 should be where the residuals are most similar *Plot difference in residuals* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-21-1.png" width="672" /> * This shows how `mod2` has higher valued predictions when x1 and x2 are opposite signs compared to the predictions from `mod1` *Plot difference in distance from 0 between `mod1` and `mod1` resids* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate_at(c("mod1", "mod2"), abs) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-22-1.png" width="672" /> * see slightly more red than blue indicating that `mod2` may, in general, have slightly smaller residuals on a wider range of locations * however little and `mod1` has advantage of simplicity ``` ### 23\.4\.5 1. What happens if you repeat the analysis of `sim2` using a model without an intercept. What happens to the model equation? What happens to the predictions? ``` mod2 <- lm(y ~ x, data = sim2) mod2_NoInt <- lm(y ~ x - 1, data = sim2) mod2 ``` ``` ## ## Call: ## lm(formula = y ~ x, data = sim2) ## ## Coefficients: ## (Intercept) xb xc xd ## 1.1522 6.9639 4.9750 0.7588 ``` ``` mod2_NoInt ``` ``` ## ## Call: ## lm(formula = y ~ x - 1, data = sim2) ## ## Coefficients: ## xa xb xc xd ## 1.152 8.116 6.127 1.911 ``` * you have an ANOVA analysis, one of the variables takes on the value of the intercept, the others all have the value of the intercept added to them. 2. Use `model_matrix()` to explore the equations generated for the models I fit to `sim3` and `sim4`. Why is `*` a good shorthand for interaction? ``` model_matrix(y ~ x1 * x2, data = sim3) ``` ``` ## # A tibble: 120 x 8 ## `(Intercept)` x1 x2b x2c x2d `x1:x2b` `x1:x2c` `x1:x2d` ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 1 0 0 0 0 0 0 ## 2 1 1 0 0 0 0 0 0 ## 3 1 1 0 0 0 0 0 0 ## 4 1 1 1 0 0 1 0 0 ## 5 1 1 1 0 0 1 0 0 ## 6 1 1 1 0 0 1 0 0 ## 7 1 1 0 1 0 0 1 0 ## 8 1 1 0 1 0 0 1 0 ## 9 1 1 0 1 0 0 1 0 ## 10 1 1 0 0 1 0 0 1 ## # ... with 110 more rows ``` ``` model_matrix(y ~ x1 * x2, data = sim4) ``` ``` ## # A tibble: 300 x 4 ## `(Intercept)` x1 x2 `x1:x2` ## <dbl> <dbl> <dbl> <dbl> ## 1 1 -1 -1 1 ## 2 1 -1 -1 1 ## 3 1 -1 -1 1 ## 4 1 -1 -0.778 0.778 ## 5 1 -1 -0.778 0.778 ## 6 1 -1 -0.778 0.778 ## 7 1 -1 -0.556 0.556 ## 8 1 -1 -0.556 0.556 ## 9 1 -1 -0.556 0.556 ## 10 1 -1 -0.333 0.333 ## # ... 
with 290 more rows ``` * because each of the levels are multiplied by one another (just don’t have to write in the design variables) 3. Using the basic principles, convert the formulas in the following two models into functions. (Hint: start by converting the categorical variable into 0\-1 variables.) ``` mod1 <- lm(y ~ x1 + x2, data = sim3) mod2 <- lm(y ~ x1 * x2, data = sim3) ``` * not completed 4. For `sim4`, which of `mod1` and `mod2` is better? I think `mod2` does a slightly better job at removing patterns, but it’s pretty subtle. Can you come up with a plot to support my claim? ``` ```r mod1 <- lm(y ~ x1 + x2, data = sim4) mod2 <- lm(y ~ x1 * x2, data = sim4) grid <- modelr::seq_range(sim4$x1, n = 3, pretty = TRUE) sim4 %>% gather_residuals(mod1, mod2) %>% mutate(resid_abs = (resid)^2) %>% group_by(model) %>% summarise(rmse = sqrt(mean(resid_abs))) ``` ``` ## # A tibble: 2 x 2 ## model rmse ## <chr> <dbl> ## 1 mod1 2.10 ## 2 mod2 2.06 ``` * The aggregate `rmse` between the two models is nearly the same. ```r sim4 %>% gather_residuals(mod1, mod2) %>% ggplot(aes(x = resid, fill = model, group = model))+ geom_density(alpha = 0.3) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-19-1.png" width="672" /> * any difference in resids is pretty subtle... *Let's plot them though and see how their predictions differ* ```r sim4 %>% spread_residuals(mod1, mod2) %>% gather_predictions(mod1, mod2) %>% ggplot(aes(x1, pred, colour = x2, group = x2))+ geom_line()+ geom_point(aes(y = y), alpha = 0.3)+ facet_wrap(~model) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-20-1.png" width="672" /> * notice subtle difference where for mod2 as x2 decreases, the predicted value for x1 also decreases, this is because the interaciton term between these is positive * the values near where x2 and x1 are most near 0 should be where the residuals are most similar *Plot difference in residuals* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-21-1.png" width="672" /> * This shows how `mod2` has higher valued predictions when x1 and x2 are opposite signs compared to the predictions from `mod1` *Plot difference in distance from 0 between `mod1` and `mod1` resids* ```r sim4 %>% spread_residuals(mod1, mod2) %>% mutate_at(c("mod1", "mod2"), abs) %>% mutate(mod2_less_mod1 = mod2 - mod1) %>% group_by(x1, x2) %>% summarise(mod2_less_mod1 = mean(mod2_less_mod1) ) %>% ungroup() %>% ggplot(aes(x = x1, y = x2))+ geom_tile(aes(fill = mod2_less_mod1))+ geom_text(aes(label = round(mod2_less_mod1, 1)), size = 3)+ scale_fill_gradient2() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-22-1.png" width="672" /> * see slightly more red than blue indicating that `mod2` may, in general, have slightly smaller residuals on a wider range of locations * however little and `mod1` has advantage of simplicity ``` Appendix -------- ### Other model families * **Generalised linear models**, e.g. `stats::glm()`. Linear models assume that the response is continuous and the error has a normal distribution. Generalised linear models extend linear models to include non\-continuous responses (e.g. binary data or counts). 
They work by defining a distance metric based on the statistical idea of likelihood. * **Generalised additive models**, e.g. `mgcv::gam()`, extend generalised linear models to incorporate arbitrary smooth functions. That means you can write a formula like `y ~ s(x)` which becomes an equation like `y = f(x)` and let `gam()` estimate what that function is (subject to some smoothness constraints to make the problem tractable). * **Penalised linear models**, e.g. `glmnet::glmnet()`, add a penalty term to the distance that penalises complex models (as defined by the distance between the parameter vector and the origin). This tends to make models that generalise better to new datasets from the same population. * **Robust linear models**, e.g. `MASS:rlm()`, tweak the distance to downweight points that are very far away. This makes them less sensitive to the presence of outliers, at the cost of being not quite as good when there are no outliers. * **Trees**, e.g. `rpart::rpart()`, attack the problem in a completely different way than linear models. They fit a piece\-wise constant model, splitting the data into progressively smaller and smaller pieces. Trees aren’t terribly effective by themselves, but they are very powerful when used in aggregate by models like **random forests** (e.g. `randomForest::randomForest()`) or **gradient boosting machines** (e.g. `xgboost::xgboost`.) ### 23\.2 book example ``` models <- tibble( a1 = runif(250, -20, 40), a2 = runif(250, -5, 5) ) ggplot(sim1, aes(x,y))+ geom_abline(aes(intercept = a1, slope = a2), data = models, alpha = 0.25)+ geom_point() ``` Next, lets set\-up a way to calculate the distance between predicted value and each point. ``` models_error <- models %>% mutate(preds = map2(.y = a1, .x = a2, ~mutate(sim1, pred = .x*x + .y, resid = y - pred, error_squared = (y - pred)^2, error_abs = abs(y - pred))), rmse = map_dbl(preds, ~(with(.x, mean(error_squared)) %>% sqrt(.))), mae = map_dbl(preds, ~with(.x, mean(error_abs))), rank_rmse = min_rank(rmse)) ``` ``` ggplot(sim1, aes(x, y))+ geom_abline(aes(intercept = a1, slope = a2, colour = -rmse), data = filter(models_error, rank_rmse <= 10))+ geom_point() ``` Could instead plot this as a model of a1 vs a2 and whichever does the best ``` models_error %>% ggplot(aes(x = a1, y = a2))+ geom_point(colour = "red", size = 4, data = filter(models_error, rank_rmse < 15))+ geom_point(aes(colour = -rmse)) ``` *Could be more methodical and use Grid Search.* Let’s use the min and max points of the top 15 to set. ``` #need helper function because distance function expects the model as a numeric vector of length 2 sim1_rmse <- function(b0, b1, df = sim1, x = "x", y = "y"){ ((b0 + b1*df[[x]]) - df[[y]])^2 %>% mean() %>% sqrt() } sim1_rmse(2,3) ``` ``` ## [1] 4.574414 ``` ``` grid_space <- models_error %>% filter(rank_rmse < 15) %>% summarise(min_x = min(a1), max_x = max(a1), min_y = min(a2), max_y = max(a2)) grid_models <- data_grid(grid_space, a1 = seq(min_x, max_x, length = 25), a2 = seq(min_y, max_y, length = 25) ) %>% mutate(rmse = map2_dbl(a1, a2, sim1_rmse, df = sim1)) grid_models %>% ggplot(aes(x = a1, y = a2))+ geom_point(colour = "red", size = 4, data = filter(grid_models, min_rank(rmse) < 15))+ geom_point(aes(colour = -rmse)) ``` In the future could add\-in a grid\-search that would have used PCA to first rotate axes and then do min and max values. 
Could instead use Newton\-Raphson search with `optim` ``` model_1df <- function(betas, x1 = sim1$x) { betas[1] + x1 * betas[2] } measure_rmse <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) sqrt(mean(diff^2)) } best_rmse <- optim(c(0,0), measure_rmse, data = sim1) best_rmse$par ``` ``` ## [1] 4.222248 2.051204 ``` ``` best_rmse$value ``` ``` ## [1] 2.128181 ``` The above produces coefficients roughly equal to those from R’s `lm()` function ``` sim1_mod <- lm(y ~ x, data = sim1) sim1_mod %>% coef ``` ``` ## (Intercept) x ## 4.220822 2.051533 ``` ``` rmse(sim1_mod, sim1) ``` ``` ## [1] 2.128181 ``` * Notice they are *slightly* different, perhaps due to the number of steps `optim()` will take. E.g. could build a function for optimizing on MAE instead and it still works ``` measure_mae <- function(mod, data) { diff <- data$y - model_1df(betas = mod, data$x) mean(abs(diff)) } best_mae <- optim(c(0,0), measure_mae, data = sim1) best_mae$par ``` ``` ## [1] 4.364852 2.048918 ``` ### 23\.2\.1\.1 ``` *Let's look at differences in the coefficients produced:* ```r mae_df <- best_mae_sims %>% map("par") %>% transpose() %>% set_names(c("bo", "b1")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "mae", row_n = row_number()) rmse_df <- best_rmse_sims %>% map("par") %>% transpose() %>% set_names(c("bo", "b1")) %>% as_tibble() %>% unnest() %>% mutate(error_type = "rmse", row_n = row_number()) bind_rows(rmse_df, mae_df) %>% ggplot(aes(x = bo, colour = error_type))+ geom_point(aes(y = b1)) ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-29-1.png" width="672" /> * see more variability in the b1 * another way of visualizing the variability in coefficients is below ```r left_join(rmse_df, mae_df, by = "row_n", suffix = c("_rmse", "_mae")) %>% ggplot(aes(x = b1_rmse, y = b1_mae))+ geom_point()+ coord_fixed() ``` <img src="23-model-basics_files/figure-html/unnamed-chunk-30-1.png" width="672" /> ``` ### 23\.3\.3\.2 ``` #How can I add a prefix when using spread_predictions() ? -- could use the #method below sim1 %>% gather_predictions(sim1_mod, sim1_mod_loess) %>% mutate(model = str_c(model, "_pred")) %>% spread(key = model, value = pred) ``` ``` ## # A tibble: 30 x 4 ## x y sim1_mod_loess_pred sim1_mod_pred ## <int> <dbl> <dbl> <dbl> ## 1 1 2.13 5.34 6.27 ## 2 1 4.20 5.34 6.27 ## 3 1 7.51 5.34 6.27 ## 4 2 8.99 8.27 8.32 ## 5 2 10.2 8.27 8.32 ## 6 2 11.3 8.27 8.32 ## 7 3 7.36 10.8 10.4 ## 8 3 10.5 10.8 10.4 ## 9 3 10.5 10.8 10.4 ## 10 4 11.9 12.8 12.4 ## # ... with 20 more rows ``` ``` #now could add a spread_residuals() without it breaking... sim1 %>% gather_predictions(sim1_mod, sim1_mod_loess) %>% ggplot()+ geom_point(aes(x = x, y = y))+ geom_line(aes(x = x, y = pred))+ facet_wrap(~model) ``` ### tidy grid\_space Below is a pseudo\-tidy way of creating the `grid_space` var from above; it probably took more effort to create than it saved, so I didn’t use it. However, you could imagine that if you had to search across A LOT of values it could be worth doing something similar to this (note `caret` or `tidymodels` are resources for effectively building search spaces for hyperparameter tuning and other related modeling activities). ``` funs_names <- tibble(funs = c(rep("min", 2), rep("max", 2)), coord = rep(c("x", "y"), 2), field_names = str_c(funs, "_", coord)) grid_space <- models_error %>% filter(rank_rmse < 15) %>% select(a1, a2) %>% rep(2) %>% invoke_map(.f = funs_names$funs, .x = .) 
%>% set_names(funs_names$field_names) %>% as_tibble() grid_space ``` ``` ## # A tibble: 1 x 4 ## min_x min_y max_x max_y ## <dbl> <dbl> <dbl> <dbl> ## 1 -7.10 0.319 14.9 3.89 ``` ### 23\.4\.5\.4 Rather than `geom_density` or `geom_freqpoly` let’s look at histogram with values overlaid rather than stacked. ``` sim4 %>% gather_residuals(mod1, mod2) %>% ggplot(aes(x = resid, y = ..density.., fill = model))+ geom_histogram(position = "identity", alpha = 0.3) ``` ``` ## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. ```
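### 23\.4\.5\.3

Exercise 23\.4\.5 \#3 was left incomplete above, so here is a rough sketch (my addition) of one way to start: hand\-build the design matrices the two formulas imply, assuming `sim3`’s `x2` is a factor with levels "a" through "d". The output can be checked against `model_matrix()`.

```
# sketch: design matrices implied by y ~ x1 + x2 and y ~ x1 * x2 for sim3
# (compare against model_matrix(sim3, y ~ x1 * x2))
model1_fun <- function(df) {
  df %>%
    transmute(
      intercept = 1,
      x1 = x1,
      x2b = as.numeric(x2 == "b"),
      x2c = as.numeric(x2 == "c"),
      x2d = as.numeric(x2 == "d")
    )
}

model2_fun <- function(df) {
  model1_fun(df) %>%
    mutate(
      x1_x2b = x1 * x2b,
      x1_x2c = x1 * x2c,
      x1_x2d = x1 * x2d
    )
}

model2_fun(sim3)
```

Predictions for either model would then just be the matrix product of the corresponding design matrix with the model’s `coef()` vector.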
Ch. 24: Model building ====================== **Key questions:** * 24\.2\.3\. \#1, 2, 4 **Functions and notes:** * `data_grid`, argument: `.model`: if the model needs variables that haven’t been supplied explicitly, will auto\-fill them with “typical” values; continuous –\> median; categorical –\> mode (see the short example at the end of the appendix) * `MASS::rlm` robust linear model that uses “M estimation by default” + warning: MASS::select will clash with dplyr::select, so I usually won’t load in MASS explicitly 24\.2: Why are low quality diamonds more expensive? --------------------------------------------------- ``` # model for only small diamonds diamonds2 <- diamonds %>% filter(carat <= 2.5) %>% mutate(lprice = log2(price), lcarat = log2(carat)) mod_diamond <- lm(lprice ~ lcarat, data = diamonds2) mod_diamond2 <- lm(lprice ~ lcarat + color + cut + clarity, data = diamonds2) diamonds2 <- diamonds2 %>% add_residuals(mod_diamond2, "resid_lg") ``` ### 24\.2\.3 1. In the plot of `lcarat` vs. `lprice`, there are some bright vertical strips. What do they represent? ``` plot_lc_lp <- diamonds2 %>% ggplot(aes(lcarat, lprice))+ geom_hex(show.legend = FALSE) plot_lp_lc <- diamonds2 %>% ggplot(aes(lprice, lcarat))+ geom_hex(show.legend = FALSE) plot_lp <- diamonds2 %>% ggplot(aes(lprice))+ geom_histogram(binwidth = .1) plot_lc <- diamonds2 %>% ggplot(aes(lcarat))+ geom_histogram(binwidth = 0.1) gridExtra::grid.arrange(plot_lc_lp, plot_lc, plot_lp + coord_flip()) ``` * The vertical bands correspond with clumps of `carat_lg` values falling across a range of `price_lg` values. *Histogram of `carat` values:* ``` diamonds2 %>% ggplot(aes(carat))+ geom_histogram(binwidth = 0.01)+ scale_x_continuous(breaks = seq(0, 2, 0.1)) ``` * The chart above shows spikes in carat values at 0\.3, 0\.4, 0\.41, 0\.5, 0\.7, 0\.9, 1\.0, 1\.01, 1\.2, 1\.5, 1\.7 and 2\.0; the distribution spikes at each of these values and then decreases until hitting the next spike * This suggests there is a preference for round numbers ending on tenths * It’s curious why you don’t really see spikes at 0\.6, 0\.8, 1\.1, 1\.3, 1\.4, 1\.6, 1\.8, 1\.9; it suggests there is something special about those particular values – perhaps diamonds just tend to develop near those sizes so are more available in sizes of 0\.7 than say 0\.8 * this article also found similar spikes: [https://www.diamdb.com/carat\-weight\-vs\-face\-up\-size/](https://www.diamdb.com/carat-weight-vs-face-up-size/) as did this: [https://www.pricescope.com/wiki/diamonds/diamond\-carat\-weight](https://www.pricescope.com/wiki/diamonds/diamond-carat-weight) , which use different data sets (though they do not explain the spike at 0\.9 but no spike at 1\.4\) 2. If `log(price) = a_0 + a_1 * log(carat)`, what does that say about the relationship between `price` and `carat`? * with a natural log this is an elasticity: a 1% change in carat corresponds with approximately an a\_1 percent change in price * with log base 2, as used in the model above, the interpretation is in terms of doubling: `lprice = a_0 + a_1 * lcarat` implies `price = 2^a_0 * carat^a_1`, so doubling carat multiplies the expected price by `2^a_1` 3. Extract the diamonds that have very high and very low residuals. Is there anything unusual about these diamonds? Are they particularly bad or good, or do you think these are pricing errors? 
``` extreme_vals <- diamonds2 %>% mutate(extreme_value = (abs(resid_lg) > 1)) %>% filter(extreme_value) %>% add_predictions(mod_diamond2, "pred_lg") %>% mutate(price_pred = 2^(pred_lg)) #graph extreme points as well as line of pred diamonds2 %>% add_predictions(mod_diamond2) %>% # mutate(extreme_value = (abs(resid_lg) > 1)) %>% # filter(!extreme_value) %>% ggplot(aes(carat, price))+ geom_hex(bins = 50)+ geom_point(aes(carat, price), data = extreme_vals, color = "orange") ``` * It’s possible some of these were mislabeled or errors, e.g. a typo where 200 was miswritten as 2000, though given the wide range in pricing this does not seem like that extreme of a variation. ``` diamonds2 %>% add_predictions(mod_diamond2) %>% mutate(extreme_value = (abs(resid_lg) > 1), price_pred = 2^pred) %>% filter(extreme_value) %>% mutate(multiple = price / price_pred) %>% arrange(desc(multiple)) %>% select(price, price_pred, multiple) ``` ``` ## # A tibble: 16 x 3 ## price price_pred multiple ## <int> <dbl> <dbl> ## 1 2160 314. 6.88 ## 2 1776 412. 4.31 ## 3 1186 284. 4.17 ## 4 1186 284. 4.17 ## 5 1013 264. 3.83 ## 6 2366 774. 3.05 ## 7 1715 576. 2.98 ## 8 4368 1705. 2.56 ## 9 10011 4048. 2.47 ## 10 3807 1540. 2.47 ## 11 3360 1373. 2.45 ## 12 3920 1705. 2.30 ## 13 1415 639. 2.21 ## 14 1415 639. 2.21 ## 15 1262 2644. 0.477 ## 16 10470 23622. 0.443 ``` * If the mislabeling were an issue of e.g. 200 entered as 2000, you would expect some of the actual values to be \~10x or \~1/10th the predicted value. None of them appear to have this issue, except maybe the item priced at 2160 with a predicted price of 314 – at roughly 7x the prediction, that is the closest thing in the table to a misplaced\-digit error 4. Does the final model, `mod_diamond2`, do a good job of predicting diamond prices? Would you trust it to tell you how much to spend if you were buying a diamond? ``` perc_unexplained <- diamonds2 %>% add_predictions(mod_diamond2, "pred") %>% mutate(pred_2 = 2^pred, mean_price = mean(price), error_deviation = (price - pred_2)^2, reg_deviation = (pred_2 - mean_price)^2, tot_deviation = (price - mean_price)^2) %>% summarise(R_squared = sum(error_deviation) / sum(tot_deviation)) %>% flatten_dbl() 1 - perc_unexplained ``` ``` ## [1] 0.9653255 ``` * \~96\.5% of the variance is explained by the model, which seems pretty solid, though what counts as “good” is relative to the situation * See [24\.2\.3\.3](24-model-building.html#section-92) for other considerations, though even this is very incomplete. Would want to check a variety of other metrics to further evaluate trust. 24\.3 What affects the number of daily flights? ----------------------------------------------- Some useful notes copied from this section: ``` daily <- flights %>% mutate(date = make_date(year, month, day)) %>% count(date) daily <- daily %>% mutate(month = month(date, label = TRUE)) daily <- daily %>% mutate(wday = wday(date, label = TRUE)) term <- function(date) { cut(date, breaks = ymd(20130101, 20130605, 20130825, 20140101), labels = c("spring", "summer", "fall") ) } daily <- daily %>% mutate(term = term(date)) daily %>% filter(wday == "Sat") %>% ggplot(aes(date, n, colour = term))+ geom_point(alpha = .3)+ geom_line()+ scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b") ``` ### 24\.3\.5 1. Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\. (Hint: they all have the same explanation.) How would these days generalise to another year? 
* January 20th was the day before MLK day[41](#fn41) * May 26th was the day before Memorial Day weekend * September 1st was the day before Labor Day. Based on the above, it seems a variable representing “holiday” or “holiday weekend” may be valuable. 2. What do the three days with high positive residuals represent? How would these days generalise to another year? ``` daily <- flights %>% mutate(date = make_date(year, month, day)) %>% count(date) daily <- daily %>% mutate(wday = wday(date, label = TRUE)) ``` ``` mod <- lm(n ~ wday, data = daily) daily <- daily %>% add_residuals(mod) daily %>% top_n(3, resid) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-30 857 Sat 112. ## 2 2013-12-01 987 Sun 95.5 ## 3 2013-12-28 814 Sat 69.4 ``` * these days correspond with the Saturday and Sunday of Thanksgiving, as well as the Saturday after Christmas * these days can fall on different days of the week each year so would vary from year to year depending on which day they fell on + ideally you would include some “holiday” variable to help capture the impact of these / better generalise between years. *Check the absolute values* ``` daily %>% top_n(3, abs(resid)) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-28 634 Thu -332. ## 2 2013-11-29 661 Fri -306. ## 3 2013-12-25 719 Wed -244. ``` * The days with the greatest magnitude for residuals were on Christmas Day, Thanksgiving Day, and the day after Thanksgiving 3. Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e. it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`. How does this model compare with the model with every combination of `wday` and `term`? ``` term <- function(date) { cut(date, breaks = ymd(20130101, 20130605, 20130825, 20140101), labels = c("spring", "summer", "fall") ) } daily <- daily %>% mutate(term = term(date)) # example with wday_mod Example_term_with_sat <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday_mod, data = .) # just wday wkday <- daily %>% lm(n ~ wday, data = .) # wday and term, no interaction... wkday_term <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday + term, data = .) # wday and term, interaction wkday_term_interaction <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday*term, data = .) 
daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_predictions(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * In the example, Saturday has a different predicted number of flights in the summer + when just including `wkday` you don’t see this differentiation + when including `wkday` and `term` you see differentiation in the summer, though this difference is the same across all `wday`s, hence the increase for Saturdays appears smaller than it does in either the example (where the term is only interacted with Saturday) or the `wkday_term_interaction` chart, where the interaction is allowed for each day of the week + you see increases in flights across pretty much all `wday`s in summer, though you see the biggest difference in Saturday[42](#fn42). *Residuals of these models* ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_residuals(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * The graphs with the Saturday term and the interaction across terms do not show gross changes in residuals varying by season the way the models that included just weekday, or weekday and term without an interaction, do. * note that you have a few days with large negative residuals[43](#fn43) + these likely correspond with holidays 4. Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays. What do the residuals of that model look like? *Create dataset of federal holidays* ``` # holidays that could have been added: Easter, Black Friday # consider adding a filter to remove Columbus Day and perhaps Veteran's Day holidays <- tribble( ~HolidayName, ~HolidayDate, "New Year's", "2013-01-01", "MLK", "2013-01-21", "President's Day", "2013-02-18", "Memorial Day", "2013-05-27", "Independence Day", "2013-07-04", "Labor Day", "2013-09-02", "Columbus Day", "2013-10-14", "Veteran's Day", "2013-11-11", "Thanksgiving", "2013-11-28", "Christmas Day", "2013-12-25" ) %>% mutate(HolidayDate = ymd(HolidayDate)) ``` Create model with Holiday variable ``` Example_term_with_sat_holiday <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% lm(n ~ wday_mod + Holiday, data = .) ``` Look at residuals of model ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% gather_residuals(Example_term_with_sat_holiday, Example_term_with_sat) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * Notice the residuals for days like July 4th and Christmas are closer to 0 now, though residuals for smaller holidays like MLK, President’s, Columbus, and Veteran’s Day are now positive, whereas before they did not have such noticeable aberrations * Suggests that just “holiday” is not enough to capture the relationship + In [24\.3\.5\.4](24-model-building.html#section-94) I show how to create a “near holiday” variable (though I do not add any new analysis after creating this) 5. What happens if you fit a day of week effect that varies by month (i.e. `n ~ wday * month`)? 
Why is this not very helpful? *Create model* ``` week_month <- daily %>% mutate(month = month(date) %>% as.factor()) %>% lm(n ~ wday * month, data = .) ``` *Graph predictions* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_predictions(wkday_term_interaction, week_month) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * This model has the most flexibility / inputs, though this makes the pattern harder to follow / interpret * Certain decreases in the month to month model are difficult to explain, for example the decrease in the month of May. *Graph residuals* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_residuals(wkday_term_interaction, week_month) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` The residuals seem to partially explain some of these inexplicable ups / downs: * For the model that incorporates an interaction with month, a holiday in a month tends to cause the day of the week it fell on to have high residuals on the non\-holiday days of that month, an effect that is less pronounced in the models interacted with `term`[44](#fn44) + The reason for this is that for the monthly variables there are only 4\-5 of each weekday in each month, so a holiday on one of these can substantially impact the expected number of flights for that weekday in that month (i.e. the prediction is based on just 4\-5 observations). 
``` flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday(date, label = TRUE)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` * 25th and 75th percentiles aren’t visibly different, but the median is a little higher * the same is the case for Saturday travel, which does not fit this hypothesis as neatly. The effect seems to apply to the weekend generally rather than just Sunday, and it seems like there may be other potential explanations than “business travel” 8. It’s a little frustrating that Sunday and Saturday are on separate ends of the plot. Write a small function to set the levels of the factor so that the week starts on Monday. ``` wday_modified <- function(date){ date_order <- (wday(date) + 5) %% 7 date <- wday(date, label = TRUE) %>% fct_reorder(date_order) date } flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday_modified(date)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` Appendix -------- ### 24\.2\.3\.3 Plots of extreme values against a sample and colored by some of the key attributes *Plots of extreme values against carat, price, clarity* ``` diamonds2 %>% add_predictions(mod_diamond2) %>% sample_n(5000) %>% ggplot(aes(carat, price))+ geom_point(aes(carat, price, colour = clarity), alpha = 0.5)+ geom_point(aes(carat, price, colour = clarity), data = extreme_vals, size = 3) ``` *Plots of extreme values against carat, price, cut* ``` diamonds2 %>% add_predictions(mod_diamond2) %>% sample_n(5000) %>% ggplot(aes(carat, price))+ # geom_hex(bins = 50)+ geom_point(aes(carat, price, colour = cut), alpha = 0.5)+ geom_point(aes(carat, price, colour = cut), data = extreme_vals, size = 3) ``` ### 24\.2\.3\.1 Visualization with horizontal stripes and `lprice` as the focus ``` # horizontal stripes gridExtra::grid.arrange(plot_lp_lc, plot_lp, plot_lc + coord_flip()) ``` * same thing, just change orientation and highlight `lprice` with a histogram *A few other graphs from this problem* ``` diamonds2 %>% ggplot(aes(price))+ geom_histogram(binwidth = 50) ``` ``` diamonds2 %>% ggplot(aes(carat))+ geom_histogram(binwidth = 0.01) ``` Taking the log of price seems to have a bigger impact on the shape of the geom\_hex graph ``` diamonds2 %>% ggplot(aes(carat, lprice))+ geom_hex(show.legend = FALSE) ``` ``` diamonds2 %>% ggplot(aes(lcarat, price))+ geom_hex(show.legend = FALSE) ``` ### 24\.3\.5\.4 In this section I create a marker for days that are “near a holiday” ``` near_holidays <- holidays %>% # This creates a series of helper variables to create the variable 'Holiday_IntervalDay' that represents an interval that encloses the period between the holiday and the beginning or end of the most recent weekend mutate(HolidayWday = wday(HolidayDate, label = TRUE), HolidayWknd = lubridate::round_date(HolidayDate, unit = "week"), HolidayFloor = lubridate::floor_date(HolidayDate, unit = "week"), HolidayCeiling = lubridate::ceiling_date(HolidayDate, unit = "week"), Holiday_IntervalDay = case_when(HolidayWknd == HolidayFloor ~ (HolidayFloor - ddays(2)), TRUE ~ HolidayCeiling)) %>% mutate(Holiday_Period = interval(pmin(HolidayDate, Holiday_IntervalDay), pmax(HolidayDate, Holiday_IntervalDay))) # This returns each day and whether it occurred near a holiday near_holiday <- map(near_holidays$Holiday_Period, ~(seq.Date(ymd("2013-01-01"), ymd("2013-12-31"), by = "day") %within% .x)) %>% transpose() %>% map_lgl(any) %>% as_tibble() %>% 
rename(NearHoliday = value) %>% mutate(date = seq.Date(ymd("2013-01-01"), ymd("2013-12-31"), by = "day")) near_holiday ``` ``` ## # A tibble: 365 x 2 ## NearHoliday date ## <lgl> <date> ## 1 TRUE 2013-01-01 ## 2 FALSE 2013-01-02 ## 3 FALSE 2013-01-03 ## 4 FALSE 2013-01-04 ## 5 FALSE 2013-01-05 ## 6 FALSE 2013-01-06 ## 7 FALSE 2013-01-07 ## 8 FALSE 2013-01-08 ## 9 FALSE 2013-01-09 ## 10 FALSE 2013-01-10 ## # ... with 355 more rows ``` * I ended\-up not adding any additional analysis here, though the methodology for creating the “near holiday” seemed worth saving * Could come back to add more in the future 
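Since the `NearHoliday` flag was created but never used, here is a minimal sketch (my addition, not part of the original analysis) of how it could be joined onto `daily` and dropped into one of the earlier models:

```
# join the NearHoliday flag onto the daily counts and refit the Saturday-term
# model with it as an extra predictor (sketch only; results not shown)
daily_nh <- daily %>%
  left_join(near_holiday, by = "date") %>%
  mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday))

mod_near_holiday <- lm(n ~ wday_mod + NearHoliday, data = daily_nh)

daily_nh %>%
  add_residuals(mod_near_holiday) %>%
  ggplot(aes(date, resid, colour = wday))+
  geom_point()+
  geom_line()
```

Whether a blanket “near a holiday” flag actually helps, or whether each holiday needs its own treatment, would still need to be judged against the residual plots from 24\.3\.5 \#4.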
``` extreme_vals <- diamonds2 %>% mutate(extreme_value = (abs(resid_lg) > 1)) %>% filter(extreme_value) %>% add_predictions(mod_diamond2, "pred_lg") %>% mutate(price_pred = 2^(pred_lg)) #graph extreme points as well as line of pred diamonds2 %>% add_predictions(mod_diamond2) %>% # mutate(extreme_value = (abs(resid_lg) > 1)) %>% # filter(!extreme_value) %>% ggplot(aes(carat, price))+ geom_hex(bins = 50)+ geom_point(aes(carat, price), data = extreme_vals, color = "orange") ``` * It’s possible some of these these were mislabeled or errors, e.g. an error in typing e.g. 200 miswritten as 2000, though given the wide range in pricing this does not seem like that extreme of a variation. ``` diamonds2 %>% add_predictions(mod_diamond2) %>% mutate(extreme_value = (abs(resid_lg) > 1), price_pred = 2^pred) %>% filter(extreme_value) %>% mutate(multiple = price / price_pred) %>% arrange(desc(multiple)) %>% select(price, price_pred, multiple) ``` ``` ## # A tibble: 16 x 3 ## price price_pred multiple ## <int> <dbl> <dbl> ## 1 2160 314. 6.88 ## 2 1776 412. 4.31 ## 3 1186 284. 4.17 ## 4 1186 284. 4.17 ## 5 1013 264. 3.83 ## 6 2366 774. 3.05 ## 7 1715 576. 2.98 ## 8 4368 1705. 2.56 ## 9 10011 4048. 2.47 ## 10 3807 1540. 2.47 ## 11 3360 1373. 2.45 ## 12 3920 1705. 2.30 ## 13 1415 639. 2.21 ## 14 1415 639. 2.21 ## 15 1262 2644. 0.477 ## 16 10470 23622. 0.443 ``` * If the mislabeling were an issue of e.g. 200 to 2000, you would expect that some of the actual values were \~1/10th or 10x the value of the predicted value. Though none of them appear to have this issue, except for maybe the item that was priced at 2160 but has a price of 314, which is the closest error where the actual value was \~1/10th the value of the prediction 4. Does the final model, `mod_diamonds2`, do a good job of predicting diamond prices? Would you trust it to tell you how much to spend if you were buying a diamond? ``` perc_unexplained <- diamonds2 %>% add_predictions(mod_diamond2, "pred") %>% mutate(pred_2 = 2^pred, mean_price = mean(price), error_deviation = (price - pred_2)^2, reg_deviation = (pred_2 - mean_price)^2, tot_deviation = (price - mean_price)^2) %>% summarise(R_squared = sum(error_deviation) / sum(tot_deviation)) %>% flatten_dbl() 1 - perc_unexplained ``` ``` ## [1] 0.9653255 ``` * \~96\.5% of variance is explained by model which seems pretty solid, though is relative to each situation * See [24\.2\.3\.3](24-model-building.html#section-92) for other considerations, though even this is very incomplete. Would want to check a variety of other metrics to further evaluate trust. ### 24\.2\.3 1. In the plot of `lcarat` vs. `lprice`, there are some bright vertical strips. What do they represent? 
``` plot_lc_lp <- diamonds2 %>% ggplot(aes(lcarat, lprice))+ geom_hex(show.legend = FALSE) plot_lp_lc <- diamonds2 %>% ggplot(aes(lprice, lcarat))+ geom_hex(show.legend = FALSE) plot_lp <- diamonds2 %>% ggplot(aes(lprice))+ geom_histogram(binwidth = .1) plot_lc <- diamonds2 %>% ggplot(aes(lcarat))+ geom_histogram(binwidth = 0.1) gridExtra::grid.arrange(plot_lc_lp, plot_lc, plot_lp + coord_flip()) ``` * The vertical bands correspond with clumps of `carat_lg` values falling across a range of `price_lg` values*Histogram of `carat` values:* ``` diamonds2 %>% ggplot(aes(carat))+ geom_histogram(binwidth = 0.01)+ scale_x_continuous(breaks = seq(0, 2, 0.1)) ``` * The chart above shows spikes in carat values at 0\.3, 0\.4, 0\.41, 0\.5, 0\.7, 0\.9, 1\.0, 1\.01, 1\.2, 1\.5, 1\.7 and 2\.0, each distribution spikes at that value and then decreases until hitting the next spike * This suggests there is a preference for round numbers ending on tenths * It’s curious why you don’t see really see spikes at 0\.6, 0\.8, 0\.9, 1\.1, 1\.3, 1\.4, 1\.6, 1\.8, 1\.9, it suggests either there is something special about those paricular values – perhaps diamonds just tend to develop near those sizes so are more available in sizes of 0\.7 than say 0\.8 * this article also found similar spikes: [https://www.diamdb.com/carat\-weight\-vs\-face\-up\-size/](https://www.diamdb.com/carat-weight-vs-face-up-size/) as did this: [https://www.pricescope.com/wiki/diamonds/diamond\-carat\-weight](https://www.pricescope.com/wiki/diamonds/diamond-carat-weight) , which use different data sets (though they do not explain the spike at 0\.9 but no spike at 1\.4\) 2. If `log(price) = a_0 + a_1 * log(carat)`, what does that say about the relationship between `price` and `carat`? * because we’re using a natural log it means that an a\_1 percentage change in carat corresponds with an a\_1 percentage increase in the price * if you had used a log base 2 it has a different interpretation that can be thought of in terms of relationship of doubling 3. Extract the diamonds that have very high and very low residuals. Is there anything unusual about these diamonds? Are they particularly bad or good, or do you think these are pricing errors? ``` extreme_vals <- diamonds2 %>% mutate(extreme_value = (abs(resid_lg) > 1)) %>% filter(extreme_value) %>% add_predictions(mod_diamond2, "pred_lg") %>% mutate(price_pred = 2^(pred_lg)) #graph extreme points as well as line of pred diamonds2 %>% add_predictions(mod_diamond2) %>% # mutate(extreme_value = (abs(resid_lg) > 1)) %>% # filter(!extreme_value) %>% ggplot(aes(carat, price))+ geom_hex(bins = 50)+ geom_point(aes(carat, price), data = extreme_vals, color = "orange") ``` * It’s possible some of these these were mislabeled or errors, e.g. an error in typing e.g. 200 miswritten as 2000, though given the wide range in pricing this does not seem like that extreme of a variation. ``` diamonds2 %>% add_predictions(mod_diamond2) %>% mutate(extreme_value = (abs(resid_lg) > 1), price_pred = 2^pred) %>% filter(extreme_value) %>% mutate(multiple = price / price_pred) %>% arrange(desc(multiple)) %>% select(price, price_pred, multiple) ``` ``` ## # A tibble: 16 x 3 ## price price_pred multiple ## <int> <dbl> <dbl> ## 1 2160 314. 6.88 ## 2 1776 412. 4.31 ## 3 1186 284. 4.17 ## 4 1186 284. 4.17 ## 5 1013 264. 3.83 ## 6 2366 774. 3.05 ## 7 1715 576. 2.98 ## 8 4368 1705. 2.56 ## 9 10011 4048. 2.47 ## 10 3807 1540. 2.47 ## 11 3360 1373. 2.45 ## 12 3920 1705. 2.30 ## 13 1415 639. 2.21 ## 14 1415 639. 2.21 ## 15 1262 2644. 
0.477 ## 16 10470 23622. 0.443 ``` * If the mislabeling were an issue of e.g. 200 to 2000, you would expect that some of the actual values were \~1/10th or 10x the value of the predicted value. Though none of them appear to have this issue, except for maybe the item that was priced at 2160 but has a price of 314, which is the closest error where the actual value was \~1/10th the value of the prediction 4. Does the final model, `mod_diamonds2`, do a good job of predicting diamond prices? Would you trust it to tell you how much to spend if you were buying a diamond? ``` perc_unexplained <- diamonds2 %>% add_predictions(mod_diamond2, "pred") %>% mutate(pred_2 = 2^pred, mean_price = mean(price), error_deviation = (price - pred_2)^2, reg_deviation = (pred_2 - mean_price)^2, tot_deviation = (price - mean_price)^2) %>% summarise(R_squared = sum(error_deviation) / sum(tot_deviation)) %>% flatten_dbl() 1 - perc_unexplained ``` ``` ## [1] 0.9653255 ``` * \~96\.5% of variance is explained by model which seems pretty solid, though is relative to each situation * See [24\.2\.3\.3](24-model-building.html#section-92) for other considerations, though even this is very incomplete. Would want to check a variety of other metrics to further evaluate trust. 24\.3 What affects the number of daily flights? ----------------------------------------------- Some useful notes copied from this section: ``` daily <- flights %>% mutate(date = make_date(year, month, day)) %>% count(date) daily <- daily %>% mutate(month = month(date, label = TRUE)) daily <- daily %>% mutate(wday = wday(date, label = TRUE)) term <- function(date) { cut(date, breaks = ymd(20130101, 20130605, 20130825, 20140101), labels = c("spring", "summer", "fall") ) } daily <- daily %>% mutate(term = term(date)) daily %>% filter(wday == "Sat") %>% ggplot(aes(date, n, colour = term))+ geom_point(alpha = .3)+ geom_line()+ scale_x_date(NULL, date_breaks = "1 month", date_labels = "%b") ``` ### 24\.3\.5 1. Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\. (Hint: they all have the same explanation.) How would these days generalise to another year? * January 20th was the day for MLK day[41](#fn41) * May 26th was the day before Memorial Day weekend * September 1st was the day before labor dayBased on the above, it seems a variable representing “holiday” or “holiday weekend” may be valuable. 2. What do the three days with high positive residuals represent? How would these days generalise to another year? ``` daily <- flights %>% mutate(date = make_date(year, month, day)) %>% count(date) daily <- daily %>% mutate(wday = wday(date, label = TRUE)) ``` ``` mod <- lm(n ~ wday, data = daily) daily <- daily %>% add_residuals(mod) daily %>% top_n(3, resid) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-30 857 Sat 112. ## 2 2013-12-01 987 Sun 95.5 ## 3 2013-12-28 814 Sat 69.4 ``` * these days correspond with the Saturday and Sunday of Thanksgiving, as well as the Saturday after Christmas * these days can fall on different days of the week each year so would vary from year to year depending on which day they fell on + ideally you would include some “holiday” variable to help capture the impact of these / better generalise between years*Check the absolute values* ``` daily %>% top_n(3, abs(resid)) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-28 634 Thu -332. ## 2 2013-11-29 661 Fri -306. 
## 3 2013-12-25 719 Wed -244. ``` * The days with the greatest magnitude for residuals were on Christmast Day, Thanksgiving Day, and the day after Thanksgiving 3. Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e. it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`. How does this model compare with the model with every combination of `wday` and `term`? ``` term <- function(date) { cut(date, breaks = ymd(20130101, 20130605, 20130825, 20140101), labels = c("spring", "summer", "fall") ) } daily <- daily %>% mutate(term = term(date)) # example with wday_mod Example_term_with_sat <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday_mod, data = .) # just wday wkday <- daily %>% lm(n ~ wday, data = .) # wday and term, no interaction... wkday_term <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday + term, data = .) # wday and term, interaction wkday_term_interaction <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday*term, data = .) daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_predictions(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * In the example, saturday has different predicted number of flights in the summer + when just including `wkday` you don’t see this differentiation + when including `wkday` and `term` you see differentiation in the summer, though this difference is the same across all `wday`s, hence the increased number for Saturday’s is less than it shows\-up as as compared to either the example (where the term is only interacted with for Saturday) or the `wkday_term_interaction` chart where the interaciton is allowed for each day of the week + you see increases in flights across pretty much all `wday`s in summer, though you see the biggest difference in Saturday[42](#fn42)*Residuals of these models* ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_residuals(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * The graphs with saturday term and interaction across terms do not show gross changes in residuals varying by season the way the models that included just weekday or weekday and term without an interaction do. * note that you have a few days with large negative residuals[43](#fn43) + these likely correspond with holidays 4. Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays. What do the residuals of that model look like? 
*Create dataset of federal holidays* ``` # holiday's that could have been added: Easter, black friday # consider adding a filter to remove Columbus day and perhaps Veteran's day holidays <- tribble( ~HolidayName, ~HolidayDate, "New Year's", "2013-01-01", "MLK", "2013-01-21", "President's Day", "2013-02-18", "Memorial Day", "2013-05-27", "Independene Day", "2013-07-04", "Labor Day", "2013-09-02", "Columbus Day", "2013-10-14", "Veteran's Day", "2013-11-11", "Thanksgiving", "2013-11-28", "Christmas Day", "2013-12-25" ) %>% mutate(HolidayDate = ymd(HolidayDate)) ``` Create model with Holiday variable ``` Example_term_with_sat_holiday <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% lm(n ~ wday_mod + Holiday, data = .) ``` Look at residuals of model ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% gather_residuals(Example_term_with_sat_holiday, Example_term_with_sat) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * Notice the residuals for day’s like July 4th and Christas are closer to 0 now, though residuals for smaller holidays like MLK, President’s, Columbus, and Veteran’s Day are now positive when before they did not have such noticable abberations * Suggests that just “holiday” is not enough to capture the relationship + In [24\.3\.5\.4](24-model-building.html#section-94) I show how to create a “near holiday” variable (though I do not add any new analysis after creating this) 5. What happens if you fit a day of week effect that varies by month (i.e. `n ~ wday * month`)? Why is this not very helpful? *Create model* ``` week_month <- daily %>% mutate(month = month(date) %>% as.factor()) %>% lm(n ~ wday * month, data = .) ``` *Graph predictions* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_predictions(wkday_term_interaction, week_month) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * This model has the most flexibility / inputs, though this makes the pattern harder to follow / interpret * Certain decreases in the month to month model are difficult to explain, for example the decrease in the month of May*Graph residuals* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_residuals(wkday_term_interaction, week_month) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` The residuals seem to partially explain some of these inexplicable ups / downs: * For the model that incorporates an interaciton with month, you see the residuals in months with a holiday tend to cause the associated day of the week the holiday fell on to then have high residuals on the non\-holiday days, an effect thta is less pronounced on the models interacted with `term`[44](#fn44) + The reason for this is that for the monthly variables there are only 4\-5 week days in each month, so a holiday on one of these can substantially impact the expected number of flights on the weekend in that month (i.e. the prediction is based just on 4\-5 observations). 
For the term interaction you have more like 12 observations to get an expected value, so while there is still an aberration on that day, the other days predictions are less affected 6. What would you expect the model `n ~ wday + ns(date, 5)` to look like? Knowing what you know about the data, why would you expect it to be not particularly effective? I would expect to see a similar overall pattern, but with more smoothed affects. Let’s check what these actually look like below. ``` wkday_term_ns <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday + splines::ns(date, 5), data = .) wkday_term_interaction_ns <- lm(n ~ wday * splines::ns(date, 5), data = daily) ``` Look at predictions (light grey are actuals) ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_predictions(wkday_term_ns, wkday_term_interaction_ns) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line(aes(x = date, y = n, group = wday), colour = "grey", alpha = 0.5)+ geom_line()+ facet_wrap(~model, ncol = 1) ``` Look at residuals (in light grey are actuals) ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_residuals(wkday_term_ns, wkday_term_interaction_ns) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line(aes(x = date, y = n, group = wday), colour = "grey", alpha = 0.5)+ geom_line()+ facet_wrap(~model, ncol = 1) ``` 7. We hypothesised that people leaving on Sundays are more likely to be business travellers who need to be somewhere on Monday. Explore that hypothesis by seeing how it breaks down based on distance and time: if it’s true, you’d expect to see more Sunday evening flights to places that are far away. ``` flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday(date, label = TRUE)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` * 25th and 75th percentiles aren’t visibly different, but median is a little higher * the same is the case for Saturday travel which does not seem to fit into this hypothesis as neatly. The effect seems more general to the weekend than just Saturday, and there seem like there may be other potential explanations than “business travel” 8. It’s a little frustrating that Sunday and Saturday are on separate ends of the plot. Write a small function to set the levels of the factor so that the week starts on Monday. ``` wday_modified <- function(date){ date_order <- (wday(date) + 5) %% 7 date <- wday(date, label = TRUE) %>% fct_reorder(date_order) date } flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday_modified(date)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` ### 24\.3\.5 1. Use your Google sleuthing skills to brainstorm why there were fewer than expected flights on Jan 20, May 26, and Sep 1\. (Hint: they all have the same explanation.) How would these days generalise to another year? * January 20th was the day for MLK day[41](#fn41) * May 26th was the day before Memorial Day weekend * September 1st was the day before labor dayBased on the above, it seems a variable representing “holiday” or “holiday weekend” may be valuable. 2. What do the three days with high positive residuals represent? How would these days generalise to another year? 
``` daily <- flights %>% mutate(date = make_date(year, month, day)) %>% count(date) daily <- daily %>% mutate(wday = wday(date, label = TRUE)) ``` ``` mod <- lm(n ~ wday, data = daily) daily <- daily %>% add_residuals(mod) daily %>% top_n(3, resid) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-30 857 Sat 112. ## 2 2013-12-01 987 Sun 95.5 ## 3 2013-12-28 814 Sat 69.4 ``` * these days correspond with the Saturday and Sunday of Thanksgiving, as well as the Saturday after Christmas * these days can fall on different days of the week each year so would vary from year to year depending on which day they fell on + ideally you would include some “holiday” variable to help capture the impact of these / better generalise between years*Check the absolute values* ``` daily %>% top_n(3, abs(resid)) ``` ``` ## # A tibble: 3 x 4 ## date n wday resid ## <date> <int> <ord> <dbl> ## 1 2013-11-28 634 Thu -332. ## 2 2013-11-29 661 Fri -306. ## 3 2013-12-25 719 Wed -244. ``` * The days with the greatest magnitude for residuals were on Christmast Day, Thanksgiving Day, and the day after Thanksgiving 3. Create a new variable that splits the `wday` variable into terms, but only for Saturdays, i.e. it should have `Thurs`, `Fri`, but `Sat-summer`, `Sat-spring`, `Sat-fall`. How does this model compare with the model with every combination of `wday` and `term`? ``` term <- function(date) { cut(date, breaks = ymd(20130101, 20130605, 20130825, 20140101), labels = c("spring", "summer", "fall") ) } daily <- daily %>% mutate(term = term(date)) # example with wday_mod Example_term_with_sat <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday_mod, data = .) # just wday wkday <- daily %>% lm(n ~ wday, data = .) # wday and term, no interaction... wkday_term <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday + term, data = .) # wday and term, interaction wkday_term_interaction <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday*term, data = .) 
daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_predictions(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * In the example, saturday has different predicted number of flights in the summer + when just including `wkday` you don’t see this differentiation + when including `wkday` and `term` you see differentiation in the summer, though this difference is the same across all `wday`s, hence the increased number for Saturday’s is less than it shows\-up as as compared to either the example (where the term is only interacted with for Saturday) or the `wkday_term_interaction` chart where the interaciton is allowed for each day of the week + you see increases in flights across pretty much all `wday`s in summer, though you see the biggest difference in Saturday[42](#fn42)*Residuals of these models* ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_residuals(Example_term_with_sat, wkday, wkday_term, wkday_term_interaction) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * The graphs with saturday term and interaction across terms do not show gross changes in residuals varying by season the way the models that included just weekday or weekday and term without an interaction do. * note that you have a few days with large negative residuals[43](#fn43) + these likely correspond with holidays 4. Create a new `wday` variable that combines the day of week, term (for Saturdays), and public holidays. What do the residuals of that model look like? *Create dataset of federal holidays* ``` # holiday's that could have been added: Easter, black friday # consider adding a filter to remove Columbus day and perhaps Veteran's day holidays <- tribble( ~HolidayName, ~HolidayDate, "New Year's", "2013-01-01", "MLK", "2013-01-21", "President's Day", "2013-02-18", "Memorial Day", "2013-05-27", "Independene Day", "2013-07-04", "Labor Day", "2013-09-02", "Columbus Day", "2013-10-14", "Veteran's Day", "2013-11-11", "Thanksgiving", "2013-11-28", "Christmas Day", "2013-12-25" ) %>% mutate(HolidayDate = ymd(HolidayDate)) ``` Create model with Holiday variable ``` Example_term_with_sat_holiday <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% lm(n ~ wday_mod + Holiday, data = .) ``` Look at residuals of model ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% left_join(holidays, by = c("date" = "HolidayDate")) %>% mutate(Holiday = !is.na(HolidayName)) %>% gather_residuals(Example_term_with_sat_holiday, Example_term_with_sat) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * Notice the residuals for day’s like July 4th and Christas are closer to 0 now, though residuals for smaller holidays like MLK, President’s, Columbus, and Veteran’s Day are now positive when before they did not have such noticable abberations * Suggests that just “holiday” is not enough to capture the relationship + In [24\.3\.5\.4](24-model-building.html#section-94) I show how to create a “near holiday” variable (though I do not add any new analysis after creating this) 5. What happens if you fit a day of week effect that varies by month (i.e. `n ~ wday * month`)? 
Why is this not very helpful? *Create model* ``` week_month <- daily %>% mutate(month = month(date) %>% as.factor()) %>% lm(n ~ wday * month, data = .) ``` *Graph predictions* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_predictions(wkday_term_interaction, week_month) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` * This model has the most flexibility / inputs, though this makes the pattern harder to follow and interpret * Certain decreases in the month\-by\-month model are difficult to explain, for example the decrease in the month of May *Graph residuals* (with `n ~ wday * term` as the comparison) ``` daily %>% mutate(month = month(date) %>% as.factor()) %>% gather_residuals(wkday_term_interaction, week_month) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line()+ facet_wrap(~model, ncol = 1) ``` The residuals seem to partially explain some of these otherwise inexplicable ups and downs: * For the model that incorporates an interaction with month, a holiday tends to inflate the residuals for the other (non\-holiday) occurrences of that same weekday within the month, an effect that is less pronounced in the models interacted with `term`[44](#fn44) + The reason for this is that there are only 4\-5 occurrences of each weekday in a given month, so a holiday on one of these can substantially impact the expected number of flights for that weekday in that month (i.e. the prediction is based on just 4\-5 observations). For the term interaction you have more like 12 observations per expected value, so while there is still an aberration on the holiday itself, the other days’ predictions are less affected 6. What would you expect the model `n ~ wday + ns(date, 5)` to look like? Knowing what you know about the data, why would you expect it to be not particularly effective? I would expect to see a similar overall pattern, but with smoother effects. Let’s check what these actually look like below. ``` wkday_term_ns <- daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% lm(n ~ wday + splines::ns(date, 5), data = .) wkday_term_interaction_ns <- lm(n ~ wday * splines::ns(date, 5), data = daily) ``` Look at predictions (actuals shown in light grey) ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_predictions(wkday_term_ns, wkday_term_interaction_ns) %>% ggplot(aes(date, pred, colour = wday))+ geom_point()+ geom_line(aes(x = date, y = n, group = wday), colour = "grey", alpha = 0.5)+ geom_line()+ facet_wrap(~model, ncol = 1) ``` Look at residuals (actuals shown in light grey) ``` daily %>% mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>% gather_residuals(wkday_term_ns, wkday_term_interaction_ns) %>% ggplot(aes(date, resid, colour = wday))+ geom_point()+ geom_line(aes(x = date, y = n, group = wday), colour = "grey", alpha = 0.5)+ geom_line()+ facet_wrap(~model, ncol = 1) ``` 7. We hypothesised that people leaving on Sundays are more likely to be business travellers who need to be somewhere on Monday. Explore that hypothesis by seeing how it breaks down based on distance and time: if it’s true, you’d expect to see more Sunday evening flights to places that are far away.
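Before the distance\-by\-weekday boxplot below, one optional way to bring the *time* part of the hypothesis into the check is sketched here (my addition; it assumes `flights` and lubridate are loaded and uses the `hour` column of `flights`, with 5pm chosen arbitrarily as the start of “evening”):

```
flights %>%
  mutate(date = lubridate::make_date(year, month, day),
         wday = wday(date, label = TRUE),
         evening = hour >= 17) %>%  # "evening" = scheduled departure at 5pm or later
  group_by(wday, evening) %>%
  summarise(n = n(), mean_distance = mean(distance)) %>%
  ungroup() %>%
  arrange(desc(mean_distance))
```

If the hypothesis holds, the Sunday / evening row should stand out with a comparatively high mean distance.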
``` flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday(date, label = TRUE)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` * 25th and 75th percentiles aren’t visibly different, but median is a little higher * the same is the case for Saturday travel which does not seem to fit into this hypothesis as neatly. The effect seems more general to the weekend than just Saturday, and there seem like there may be other potential explanations than “business travel” 8. It’s a little frustrating that Sunday and Saturday are on separate ends of the plot. Write a small function to set the levels of the factor so that the week starts on Monday. ``` wday_modified <- function(date){ date_order <- (wday(date) + 5) %% 7 date <- wday(date, label = TRUE) %>% fct_reorder(date_order) date } flights %>% mutate(date = lubridate::make_date(year, month, day), wday = wday_modified(date)) %>% select(date, wday, distance) %>% filter(distance < 3000) %>% ggplot(aes(wday, distance))+ geom_boxplot() ``` Appendix -------- ### 24\.2\.3\.3 Plots of extreme values against a sample and colored by some of the key attributes *Plots of extreme values against carat, price, clarity* ``` diamonds2 %>% add_predictions(mod_diamond2) %>% sample_n(5000) %>% ggplot(aes(carat, price))+ geom_point(aes(carat, price, colour = clarity), alpha = 0.5)+ geom_point(aes(carat, price, colour = clarity), data = extreme_vals, size = 3) ``` *Plots of extreme values against carat, price, cut* ``` diamonds2 %>% add_predictions(mod_diamond2) %>% sample_n(5000) %>% ggplot(aes(carat, price))+ # geom_hex(bins = 50)+ geom_point(aes(carat, price, colour = cut), alpha = 0.5)+ geom_point(aes(carat, price, colour = cut), data = extreme_vals, size = 3) ``` ### 24\.2\.3\.1 Visualization with horizontal stripes and `lprice` as the focus ``` # horizontal stripes gridExtra::grid.arrange(plot_lp_lc, plot_lp, plot_lc + coord_flip()) ``` * same thing, just change orientation and highlight `lprice` with a histogram *A few other graphs from this problem* ``` diamonds2 %>% ggplot(aes(price))+ geom_histogram(binwidth = 50) ``` ``` diamonds2 %>% ggplot(aes(carat))+ geom_histogram(binwidth = 0.01) ``` Taking the log of price seems to have a bigger impact on the shape of the geom\_hex graph ``` diamonds2 %>% ggplot(aes(carat, lprice))+ geom_hex(show.legend = FALSE) ``` ``` diamonds2 %>% ggplot(aes(lcarat, price))+ geom_hex(show.legend = FALSE) ``` ### 24\.3\.5\.4 In this section I create a marker for days that are “near a holiday” ``` near_holidays <- holidays %>% # This creates a series of helper variables to create the variable 'Holiday_IntervalDay' that represents an interval that encloses the period between the holiday and the beginning or end of the most recent weekend mutate(HolidayWday = wday(HolidayDate, label = TRUE), HolidayWknd = lubridate::round_date(HolidayDate, unit = "week"), HolidayFloor = lubridate::floor_date(HolidayDate, unit = "week"), HolidayCeiling = lubridate::ceiling_date(HolidayDate, unit = "week"), Holiday_IntervalDay = case_when(HolidayWknd == HolidayFloor ~ (HolidayFloor - ddays(2)), TRUE ~ HolidayCeiling)) %>% mutate(Holiday_Period = interval(pmin(HolidayDate, Holiday_IntervalDay), pmax(HolidayDate, Holiday_IntervalDay))) # This returns each day and whether it occurred near a holiday near_holiday <- map(near_holidays$Holiday_Period, ~(seq.Date(ymd("2013-01-01"), ymd("2013-12-31"), by = "day") %within% .x)) %>% transpose() %>% map_lgl(any) %>% as_tibble() %>% 
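  # (added note) map() above produces one logical vector per holiday interval, with one
  # entry per day of 2013; transpose() regroups these so there is one element per day,
  # and map_lgl(any) flags a day as TRUE if it falls inside at least one holiday interval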
rename(NearHoliday = value) %>% mutate(date = seq.Date(ymd("2013-01-01"), ymd("2013-12-31"), by = "day")) near_holiday ``` ``` ## # A tibble: 365 x 2 ## NearHoliday date ## <lgl> <date> ## 1 TRUE 2013-01-01 ## 2 FALSE 2013-01-02 ## 3 FALSE 2013-01-03 ## 4 FALSE 2013-01-04 ## 5 FALSE 2013-01-05 ## 6 FALSE 2013-01-06 ## 7 FALSE 2013-01-07 ## 8 FALSE 2013-01-08 ## 9 FALSE 2013-01-09 ## 10 FALSE 2013-01-10 ## # ... with 355 more rows ``` * I ended\-up not adding any additional analysis here, though the methodology for creating the “near holiday” flag seemed worth saving * Could come back to add more in the future
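As a possible next step (a sketch I am adding, not part of the original write-up), the `NearHoliday` flag could be joined onto the `daily` counts and used as an extra predictor; the object names below are my own:

```
daily_nh <- daily %>%
  left_join(near_holiday, by = "date")

# same Saturday-by-term model as before, with the near-holiday flag added
mod_near_holiday <- daily_nh %>%
  mutate(wday_mod = ifelse(wday == "Sat", paste(wday, "_", term), wday)) %>%
  lm(n ~ wday_mod + NearHoliday, data = .)
```

Comparing this model’s residuals against `Example_term_with_sat_holiday` would show whether the wider window adds anything over the exact-date `Holiday` flag.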
Ch. 25: Many models =================== **Key questions:** * 25\.2\.5 \#1, 2 **Functions and notes:** * `nest` creates a list\-column with default key value `data`. Each row value becomes a dataframe with all non\-grouping columns and all rows corresponding with a particular group ``` iris %>% group_by(Species) %>% nest() ``` * `unnest` unnest any list\-column in your dataframe. Notes on `unnest` behavior: * if the atomic components of the elements of the list column are length \> 1, the non\-nested row columns will be duplicated when the list\-column is unnested ``` # atomic components of elements of list-col == 3 --> (will see duplicates of `x`) tibble(x = 1:100) %>% mutate(test1 = list(tibble(a = c(1, 2, 3)))) %>% unnest(test1) # atomic components of elements of list-col == 1 --> (will not see duplicates of `x`) tibble(x = 1:100) %>% mutate(test1 = list(tibble(a = 1, b = 2))) %>% unnest(test1) ``` * if there are multiple list\-cols, specify the column to unnest or default behavior will be to unnest all * when unnesting a single column but multiple list\-cols exist, the default behavior is to drop the other list columns. To override this use `.drop = FALSE`.[45](#fn45) ``` tibble(x = 1:100) %>% mutate(test1 = list(c(1, 2)), test2 = list(c(3, 4))) %>% unnest(test1, .drop = FALSE) # change to default, i.e. `.drop = TRUE` to drop `test2` column ``` * when unnesting multiple columns, all must be the same length or you will get an error, e.g. below fails: ``` tibble(x = 1:100) %>% mutate(test1 = list(c(1)), test2 = list(c(2,3))) %>% unnest() ``` ``` ## Error: All nested columns must have the same number of elements. ``` ``` # # to successfully unnest this could have added another unnest, e.g.: # tibble(x = 1:100) %>% # mutate(test1 = list(c(1)), # test2 = list(c(2,3))) %>% # unnest(test1) %>% # unnest(test2) ``` * Method for nesting individual vectors: `group_by() %>% summarise()`, e.g.: ``` iris %>% group_by(Species) %>% summarise_all(list) ``` ``` ## # A tibble: 3 x 5 ## Species Sepal.Length Sepal.Width Petal.Length Petal.Width ## <fct> <list> <list> <list> <list> ## 1 setosa <dbl [50]> <dbl [50]> <dbl [50]> <dbl [50]> ## 2 versicolor <dbl [50]> <dbl [50]> <dbl [50]> <dbl [50]> ## 3 virginica <dbl [50]> <dbl [50]> <dbl [50]> <dbl [50]> ``` * the above has the advantage of producing atomic vectors rather than dataframes as the types inside of the lists * `broom::glance` takes a model as input and outputs a one row tibble with columns for each of several model evalation statistics (note that these metrics are geared towards evaluating the training) * `broom::tidy` creates a tibble with columns `term`, `estimate`, `std.error`, `statistic` (t\-statistic) and `p.value`. A new row is created for each `term` type, e.g. intercept, x1, x2, etc. * `ggtitle()`, alternative to `labs(title = "type title here")` * see [25\.4\.5](25-many-models.html#section-96) number 3 for a useful way of wrapping certain functions in `list` functions to take advantage of the list\-col format 25\.2: gapminder ---------------- The set\-up example Hadley goes through is important, below is a slightly altered copy of his example. 
**Nested Data** ``` by_country <- gapminder::gapminder %>% group_by(country, continent) %>% nest() ``` **List\-columns** ``` country_model <- function(df) { lm(lifeExp ~ year, data = df) } ``` Want to apply this function over every data frame, the dataframes are in a list, so do this by: ``` by_country2 <- by_country %>% mutate(model = purrr::map(data, country_model)) ``` Advantage with keeping things in the dataframe is that when you filter, or move things around, everything stays in sync, as do new summary values you might add. ``` by_country2 %>% arrange(continent, country) ``` ``` ## # A tibble: 142 x 4 ## country continent data model ## <fct> <fct> <list> <list> ## 1 Algeria Africa <tibble [12 x 4]> <S3: lm> ## 2 Angola Africa <tibble [12 x 4]> <S3: lm> ## 3 Benin Africa <tibble [12 x 4]> <S3: lm> ## 4 Botswana Africa <tibble [12 x 4]> <S3: lm> ## 5 Burkina Faso Africa <tibble [12 x 4]> <S3: lm> ## 6 Burundi Africa <tibble [12 x 4]> <S3: lm> ## 7 Cameroon Africa <tibble [12 x 4]> <S3: lm> ## 8 Central African Republic Africa <tibble [12 x 4]> <S3: lm> ## 9 Chad Africa <tibble [12 x 4]> <S3: lm> ## 10 Comoros Africa <tibble [12 x 4]> <S3: lm> ## # ... with 132 more rows ``` ``` by_country2 %>% mutate(summaries = purrr::map(model, summary)) %>% mutate(r_squared = purrr::map2_dbl(model, data, rsquare)) ``` ``` ## # A tibble: 142 x 6 ## country continent data model summaries r_squared ## <fct> <fct> <list> <list> <list> <dbl> ## 1 Afghanistan Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.948 ## 2 Albania Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.911 ## 3 Algeria Africa <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.985 ## 4 Angola Africa <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.888 ## 5 Argentina Americas <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.996 ## 6 Australia Oceania <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.980 ## 7 Austria Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.992 ## 8 Bahrain Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.967 ## 9 Bangladesh Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.989 ## 10 Belgium Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.995 ## # ... with 132 more rows ``` **unnesting**, another dataframe with the residuals included and then unnest ``` by_country3 <- by_country2 %>% mutate(resids = purrr::map2(data, model, add_residuals)) ``` ``` resids <- by_country3 %>% unnest(resids) resids ``` ``` ## # A tibble: 1,704 x 7 ## country continent year lifeExp pop gdpPercap resid ## <fct> <fct> <int> <dbl> <int> <dbl> <dbl> ## 1 Afghanistan Asia 1952 28.8 8425333 779. -1.11 ## 2 Afghanistan Asia 1957 30.3 9240934 821. -0.952 ## 3 Afghanistan Asia 1962 32.0 10267083 853. -0.664 ## 4 Afghanistan Asia 1967 34.0 11537966 836. -0.0172 ## 5 Afghanistan Asia 1972 36.1 13079460 740. 0.674 ## 6 Afghanistan Asia 1977 38.4 14880372 786. 1.65 ## 7 Afghanistan Asia 1982 39.9 12881816 978. 1.69 ## 8 Afghanistan Asia 1987 40.8 13867957 852. 1.28 ## 9 Afghanistan Asia 1992 41.7 16317921 649. 0.754 ## 10 Afghanistan Asia 1997 41.8 22227415 635. -0.534 ## # ... with 1,694 more rows ``` ### 25\.2\.5 1. A linear trend seems to be slightly too simple for the overall trend. Can you do better with a quadratic polynomial? How can you interpret the coefficients of the quadratic? (Hint you might want to transform `year` so that it has mean zero.) 
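As an aside before the helper-function approach below (an alternative sketch, not the author’s method): the same centred quadratic fit can be written with `poly(..., raw = TRUE)`, which keeps the coefficients on the plain `x`, `x^2` scale; the function and column names here are my own:

```
country_model_quad <- function(df) {
  df <- mutate(df, year_cent = year - mean(year))
  lm(lifeExp ~ poly(year_cent, 2, raw = TRUE), data = df)
}

by_country %>%
  mutate(mod_quad = purrr::map(data, country_model_quad))
```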
*Create functions* ``` # funciton to center value center_value <- function(df){ df %>% mutate(year_cent = year - mean(year)) } # this function allows me to input any text to "var" to customize the inputs # to the model, default are a linear and quadratic term for year (centered) lm_quad_2 <- function(df, var = "year_cent + I(year_cent^2)"){ lm(as.formula(paste("lifeExp ~ ", var)), data = df) } ``` *Create dataframe with evaluation metrics* ``` by_country3_quad <- by_country3 %>% mutate( # create centered data data_cent = purrr::map(data, center_value), # create quadratic models mod_quad = purrr::map(data_cent, lm_quad_2), # get model evaluation stats from original model glance_mod = purrr::map(model, broom::glance), # get model evaluation stats from quadratic model glance_quad = purrr::map(mod_quad, broom::glance)) ``` *Create plots* ``` by_country3_quad %>% unnest(glance_mod, glance_quad, .sep = "_", .drop = TRUE) %>% gather(glance_mod_r.squared, glance_quad_r.squared, key = "order", value = "r.squared") %>% ggplot(aes(x = continent, y = r.squared, colour = continent)) + geom_boxplot() + facet_wrap(~order) ``` * The quadratic trend seems to do better –\> indicated by the distribution of the R^2 values being closer to one. The level of improvement seems especially pronounced for African countries.Let’s check this closer by looking at percentage point improvement in R^2 in chart below ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% mutate(bad_fit = glance_mod_r.squared < 0.25, R.squ_ppt_increase = r.squared - glance_mod_r.squared) %>% ggplot(aes(x = continent, y = R.squ_ppt_increase))+ # geom_quasirandom(aes(alpha = bad_fit), colour = "black")+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ labs(title = "Percentage point (PPT) improvement in R squared value", subtitle = "(When adding a quadratic term to the linear regression model)") ``` *View predictions from linear model with quadratic term* (of countries where linear trend did not capture relationship) ``` bad_fit <- by_country3 %>% mutate(glance = purrr::map(model, broom::glance)) %>% unnest(glance, .drop = TRUE) %>% filter(r.squared < 0.25) #solve with join with bad_fit by_country3_quad %>% semi_join(bad_fit, by = "country") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` * while the quadratic model does a better job fitting the model than a linear term does, I wouldn’t say it does a good job of fitting the model * it looks like the trends are generally consistent rates of improvement and then there is a sudden drop\-off associated with some event, hence an intervention variable may be a more appropriate method for modeling this pattern*Quadratic model parameters* ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% unnest(quad_coefs) %>% mutate(bad_fit = glance_mod_r.squared < 0.25) %>% ggplot(aes(x = continent, y = estimate, alpha = bad_fit))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ facet_wrap(~term, scales = "free")+ labs(caption = "Note that 'bad fit' represents a bad fit on the initial model \nthat did 
not contain a quadratic term)")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` ``` ## Warning: Using alpha for a discrete variable is not advised. ``` * The quadratic term (in a linear function, trained with the x\-value centered at the mean, as in this dataset) has a few important notes related to interpretation + If the coefficient is positive the output will be convex, if it is negative it will be concave (i.e. smile vs. frown shape) + The value on the coefficient represents 1/2 the rate at which the relationship between `lifeExp` and `year` is changing for every one unit change from the mean / expected value of `lifeExp` in the dataset. + Hence if the coefficient is near 0, that means the relationship between `lifeExp` and `year` does not change (or at least does not change at a constant rate) when moving in either direction from `lifeExp`s mean value.To better understand this, let’s look look at a specific example. Excluding Rwanda, Botswana was the `country` that the linear model without the quadratic term performed the worst on. We’ll use this as our example for interpreting the coefficients. *Plots of predicted and actual values for Botswanian life expectancy by year* ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp))+ geom_line(aes(y = pred, colour = "prediction"))+ labs(title = "Data and quadratic trend of predictions for Botswana") ``` *(note that the centered value for year in the ‘centered’ dataset is 1979\.5\)* In the model for Botswana, coefficents are: Intercept: \~ 59\.81 year (centered): \~ 0\.0607 year (centered)^2: \~ \-0\.0175 Hence for every one year we move away from the central year (1979\.5\), the rate of change between year and price decreases by *\~0\.035*. Below I show this graphically by plotting the lines tangent to the models output. ``` botswana_coefs <- by_country3_quad %>% filter(country == "Botswana") %>% with(map(mod_quad, coef)) %>% flatten_dbl() ``` Helper functions to find tangent points ``` find_slope <- function(x){ 2*botswana_coefs[[3]]*x + botswana_coefs[[2]] } find_y1 <- function(x){ botswana_coefs[[3]]*(x^2) + botswana_coefs[[2]]*x + botswana_coefs[[1]] } find_intercept <- function(x, y, m){ y - x*m } tangent_lines <- tibble(x1 = seq(-20, 20, 10)) %>% mutate(slope = find_slope(x1), y1 = find_y1(x1), intercept = find_intercept(x1, y1, slope), slope_change = x1*2*botswana_coefs[[3]]) %>% select(slope, intercept, everything()) ``` ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year_cent))+ geom_line(aes(x = year_cent, y = pred), colour = "red")+ geom_abline(aes(intercept = intercept, slope = slope), data = tangent_lines)+ coord_fixed() ``` Below is the relevant output in a table. 
`x1`: represents the change in x value from 1979\.5 `slope`: slope of the tangent line at particular `x1` value `slope_diff_central`: the amount the slope is different from the slope of the tangent line at the central year ``` select(tangent_lines, x1, slope, slope_diff_central = slope_change) ``` ``` ## # A tibble: 5 x 3 ## x1 slope slope_diff_central ## <dbl> <dbl> <dbl> ## 1 -20 0.760 0.700 ## 2 -10 0.411 0.350 ## 3 0 0.0607 0 ## 4 10 -0.289 -0.350 ## 5 20 -0.639 -0.700 ``` * notice that for every 10 year increase in `x1` we see the slope of the tangent line has decreased by 0\.35\. If we’d looked at just one year we would have seen the change was 0\.035, this correspondig with 2 multiplied by the coefficient on the quadratic term of our model. 2. Explore other methods for visualising the distribution of \\(R^2\\) per continent. You might want to try the ggbeeswarm package, which provides similar methods for avoiding overlaps as jitter, but uses deterministic methods. *visualisations of linear model* ``` by_country3_quad %>% unnest(glance_mod) %>% ggplot(aes(x = continent, y = r.squared, colour = continent))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ ggbeeswarm::geom_quasirandom() ``` * I like `geom_quasirandom()` the best as an overlay on boxplot, it keeps things centered and doesn’t have the gravitational pull affect that makes `geom_beeswarm()` become a little misaligned, it also works well here over `geom_jitter()` as the points stay better around their true value 3. To create the last plot (showing the data for the countries with the worst model fits), we needed two steps: we created a data frame with one row per country and then semi\-joined it to the original dataset. It’s possible to avoid this join if we use `unnest()` instead of `unnest(.drop = TRUE)`. How? ``` #first filter by r.squared and then unnest by_country3_quad %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(glance_mod) %>% mutate(bad_fit = r.squared < 0.25) %>% filter(bad_fit) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` 25\.4: Creating list\-columns ----------------------------- ### 25\.4\.5 1. List all the functions that you can think of that take an atomic vector and return a list. * `stringr::str_extract_all` \+ other `stringr` functions(however the below can also take types that are not atomic and are probably not really what is being looked for) * `list` * `tibble` * `map` / `lapply` 2. Brainstorm useful summary functions that, like `quantile()`, return multiple values. * `summary` * `range` * … 3. What’s missing in the following data frame? How does `quantile()` return that missing piece? Why isn’t that helpful here? 
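A quick added check of what `quantile()` itself returns makes the answer easier to see: the probabilities are stored in the *names* of the returned numeric vector, and those names are dropped once the values go through a list-column and `unnest()`.

```
q <- quantile(mtcars$mpg)
# q is a named numeric vector -- the probabilities live in the names:
names(q)
#> [1] "0%"   "25%"  "50%"  "75%"  "100%"
```

That is why the grouped version below has to carry the probabilities along explicitly (or capture `names()`, as in the appendix examples).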
``` mtcars %>% group_by(cyl) %>% summarise(q = list(quantile(mpg))) %>% unnest() ``` ``` ## # A tibble: 15 x 2 ## cyl q ## <dbl> <dbl> ## 1 4 21.4 ## 2 4 22.8 ## 3 4 26 ## 4 4 30.4 ## 5 4 33.9 ## 6 6 17.8 ## 7 6 18.6 ## 8 6 19.7 ## 9 6 21 ## 10 6 21.4 ## 11 8 10.4 ## 12 8 14.4 ## 13 8 15.2 ## 14 8 16.2 ## 15 8 19.2 ``` * need to capture probabilities of quantiles to make useful… ``` probs <- c(0.01, 0.25, 0.5, 0.75, 0.99) mtcars %>% group_by(cyl) %>% summarise(p = list(probs), q = list(quantile(mpg, probs))) %>% unnest() ``` ``` ## # A tibble: 15 x 3 ## cyl p q ## <dbl> <dbl> <dbl> ## 1 4 0.01 21.4 ## 2 4 0.25 22.8 ## 3 4 0.5 26 ## 4 4 0.75 30.4 ## 5 4 0.99 33.8 ## 6 6 0.01 17.8 ## 7 6 0.25 18.6 ## 8 6 0.5 19.7 ## 9 6 0.75 21 ## 10 6 0.99 21.4 ## 11 8 0.01 10.4 ## 12 8 0.25 14.4 ## 13 8 0.5 15.2 ## 14 8 0.75 16.2 ## 15 8 0.99 19.1 ``` * see [list(quantile()) examples](25-many-models.html#listquantile-examples) for related method that captures names of quantiles (rather than requiring th user to manually input a vector of probabilities) 4. What does this code do? Why might it be useful? ``` mtcars %>% select(1:3) %>% group_by(cyl) %>% summarise_all(funs(list)) ``` * It turns each row into an atomic vector grouped by the particular `cyl` value. It is different from `nest` in that each column creates a new list\-column representing an atomic vector. If `nest` had been used, this would have created a single dataframe that all the values woudl have been in. Could be useful for running purr through particular columns… * e.g. let’s say we want to find the number of unique items in each column for each grouping, we could do that like so ``` mtcars %>% group_by(cyl) %>% select(1:5) %>% summarise_all(funs(list)) %>% mutate_all(funs(unique = map_int(., ~length(unique(.x))))) ``` ``` ## Warning: funs() is soft deprecated as of dplyr 0.8.0 ## please use list() instead ## ## # Before: ## funs(name = f(.) ## ## # After: ## list(name = ~f(.)) ## This warning is displayed once per session. ``` ``` ## # A tibble: 3 x 10 ## cyl mpg disp hp drat cyl_unique mpg_unique disp_unique hp_unique ## <dbl> <lis> <lis> <lis> <lis> <int> <int> <int> <int> ## 1 4 <dbl~ <dbl~ <dbl~ <dbl~ 1 9 11 10 ## 2 6 <dbl~ <dbl~ <dbl~ <dbl~ 1 6 5 4 ## 3 8 <dbl~ <dbl~ <dbl~ <dbl~ 1 12 11 9 ## # ... with 1 more variable: drat_unique <int> ``` ``` # we could also simply overwrite the values (rather than make new columns) mtcars %>% group_by(cyl) %>% select(1:5) %>% summarise_all(funs(list)) %>% mutate_all(funs(map_int(., ~length(unique(.x))))) ``` ``` ## # A tibble: 3 x 5 ## cyl mpg disp hp drat ## <int> <int> <int> <int> <int> ## 1 1 9 11 10 10 ## 2 1 6 5 4 5 ## 3 1 12 11 9 11 ``` 25\.5: Simplifying list\-columns -------------------------------- ### 25\.5\.3 1. Why might the `lengths()` function be useful for creating atomic vector columns from list\-columns? * perhaps you want to measure the number of elements (or unique elements) in an individual element of a list column ``` mpg %>% group_by(cyl) %>% summarise(displ_list = list(displ)) %>% mutate(num_unique = map_int(displ_list, ~unique(.x) %>% length())) ``` ``` ## # A tibble: 4 x 3 ## cyl displ_list num_unique ## <int> <list> <int> ## 1 4 <dbl [81]> 8 ## 2 5 <dbl [4]> 1 ## 3 6 <dbl [79]> 14 ## 4 8 <dbl [70]> 17 ``` 1. List the most common types of vector found in a data frame. What makes lists different? * the atomic types: char, int, double, factor, date are all more common, they are atomic, whereas lists are not atomic vectors and can contain any type of data within them (e.g. 
a list of atomic vectors, list of lists, etc.). Appendix -------- ### Models in lists This is the more traditional way you might store models in a list ``` models_countries <- purrr::map(by_country$data, country_model) names(models_countries) <- by_country$country models_countries[1:3] ``` ``` ## $Afghanistan ## ## Call: ## lm(formula = lifeExp ~ year, data = df) ## ## Coefficients: ## (Intercept) year ## -507.5343 0.2753 ## ## ## $Albania ## ## Call: ## lm(formula = lifeExp ~ year, data = df) ## ## Coefficients: ## (Intercept) year ## -594.0725 0.3347 ## ## ## $Algeria ## ## Call: ## lm(formula = lifeExp ~ year, data = df) ## ## Coefficients: ## (Intercept) year ## -1067.8590 0.5693 ``` ### List\-columns for sampling say you want to sample all the flights on 50 days out of the year. List\-cols can be used to generate a sample like this: ``` flights %>% mutate(create_date = make_date(year, month, day)) %>% select(create_date, 5:8) %>% group_by(create_date) %>% nest() %>% sample_n(50) %>% unnest() ``` ``` ## # A tibble: 45,640 x 5 ## create_date sched_dep_time dep_delay arr_time sched_arr_time ## <date> <int> <dbl> <int> <int> ## 1 2013-02-09 900 1 1242 1227 ## 2 2013-02-09 1130 30 1434 1430 ## 3 2013-02-09 900 186 1814 1540 ## 4 2013-02-09 1220 2 1545 1532 ## 5 2013-02-09 1240 -3 1414 1444 ## 6 2013-02-09 1245 -5 1528 1600 ## 7 2013-02-09 1250 0 1526 1550 ## 8 2013-02-09 1259 -4 1535 1555 ## 9 2013-02-09 1300 -2 1540 1605 ## 10 2013-02-09 1300 3 1626 1608 ## # ... with 45,630 more rows ``` Alternatively you could use a `semi_join()`, e.g. ``` flights_samp <- flights %>% mutate(create_date = make_date(year, month, day)) %>% distinct(create_date) %>% sample_n(50) flights %>% mutate(create_date = make_date(year, month, day)) %>% select(create_date, 5:8) %>% semi_join(flights_samp, by = "create_date") ``` ``` ## # A tibble: 46,640 x 5 ## create_date sched_dep_time dep_delay arr_time sched_arr_time ## <date> <int> <dbl> <int> <int> ## 1 2013-01-15 500 -7 645 648 ## 2 2013-01-15 525 -7 825 820 ## 3 2013-01-15 530 3 839 831 ## 4 2013-01-15 540 -6 829 850 ## 5 2013-01-15 540 -5 1014 1017 ## 6 2013-01-15 600 -17 710 715 ## 7 2013-01-15 600 -11 637 709 ## 8 2013-01-15 600 -8 934 910 ## 9 2013-01-15 600 -8 658 658 ## 10 2013-01-15 600 -7 851 859 ## # ... with 46,630 more rows ``` * In some situations I find the `nest`, `unnest` method more elegant though the `semi_join` method seems to run goes faster on large dataframes * There are also other more specialized functions in the tidyverse to help with various sampling strategies ### 25\.2\.5\.1 #### Include cubic term Let’s look at this example if we had allowed year to be a 3rd order polynomial. We’re really stretching our degrees of freedom (in relation to our number of observations) in this case – these might be less likely to generalize to other data well. 
``` by_country3 %>% semi_join(bad_fit, by = "country") %>% mutate( # create centered data data_cent = purrr::map(data, center_value), # create cubic (3rd order) data mod_cubic = purrr::map(data_cent, lm_quad_2, var = "year_cent + I(year_cent^2) + I(year_cent^3)"), # get predictions for 3rd order model data_cubic = purrr::map2(data_cent, mod_cubic, add_predictions)) %>% unnest(data_cubic) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` * interpretibility of coefficients beyond quadratic term becomes less strait forward to explain ### Multiple graphs in chunk There are a variety of ways to have multiple graphs outputted and aligned side by side: * build graphs separately and use `gridExtra::grid.arrange()` * Ensure metrics have been gathered into a single column and then use `facet_wrap()`/`facet_grid()` (`ggforce` is a helpful extension package to ggplot2 that gives more functionality to these faceting functions) * manipulate chunk options, e.g. figures below have the following options set in the R code chunk: `out.width = "33%", fig.asp = 1, fig.width = 3, fig.show='hold',, fig.align='default'` ``` nz <- filter(gapminder, country == "New Zealand") nz %>% ggplot(aes(year, lifeExp)) + geom_line() + ggtitle("Full data = ") nz_mod <- lm(lifeExp ~ year, data = nz) nz %>% add_predictions(nz_mod) %>% ggplot(aes(year, pred)) + geom_line() + ggtitle("Linear trend + ") nz %>% add_residuals(nz_mod) %>% ggplot(aes(year, resid)) + geom_hline(yintercept = 0, colour = "white", size = 3) + geom_line() + ggtitle("Remaining pattern") ``` ### list(quantile()) examples Some of these examples may not represent best practices. ``` prob_vals <- c(0, .25, .5, .75, 1) iris %>% group_by(Species) %>% summarise(Petal.Length_q = list(quantile(Petal.Length))) %>% mutate(probs = list(prob_vals)) %>% unnest() ``` ``` ## # A tibble: 15 x 3 ## Species Petal.Length_q probs ## <fct> <dbl> <dbl> ## 1 setosa 1 0 ## 2 setosa 1.4 0.25 ## 3 setosa 1.5 0.5 ## 4 setosa 1.58 0.75 ## 5 setosa 1.9 1 ## 6 versicolor 3 0 ## 7 versicolor 4 0.25 ## 8 versicolor 4.35 0.5 ## 9 versicolor 4.6 0.75 ## 10 versicolor 5.1 1 ## 11 virginica 4.5 0 ## 12 virginica 5.1 0.25 ## 13 virginica 5.55 0.5 ## 14 virginica 5.88 0.75 ## 15 virginica 6.9 1 ``` *Example for using quantile across range of columns* *Also notice dynamic method for extracting names* ``` iris %>% group_by(Species) %>% summarise_all(funs(list(quantile(., probs = prob_vals)))) %>% mutate(probs = map(Petal.Length, names)) %>% unnest() ``` ``` ## # A tibble: 15 x 6 ## Species Sepal.Length Sepal.Width Petal.Length Petal.Width probs ## <fct> <dbl> <dbl> <dbl> <dbl> <chr> ## 1 setosa 4.3 2.3 1 0.1 0% ## 2 setosa 4.8 3.2 1.4 0.2 25% ## 3 setosa 5 3.4 1.5 0.2 50% ## 4 setosa 5.2 3.68 1.58 0.3 75% ## 5 setosa 5.8 4.4 1.9 0.6 100% ## 6 versicolor 4.9 2 3 1 0% ## 7 versicolor 5.6 2.52 4 1.2 25% ## 8 versicolor 5.9 2.8 4.35 1.3 50% ## 9 versicolor 6.3 3 4.6 1.5 75% ## 10 versicolor 7 3.4 5.1 1.8 100% ## 11 virginica 4.9 2.2 4.5 1.4 0% ## 12 virginica 6.22 2.8 5.1 1.8 25% ## 13 virginica 6.5 3 5.55 2 50% ## 14 virginica 6.9 3.18 5.88 2.3 75% ## 15 virginica 7.9 3.8 6.9 2.5 100% ``` ### Extracting names Maybe not best practice: ``` quantile(1:100) %>% as.data.frame() %>% rownames_to_column() ``` ``` ## rowname . 
## 1 0% 1.00 ## 2 25% 25.75 ## 3 50% 50.50 ## 4 75% 75.25 ## 5 100% 100.00 ``` Better would be to use `enframe()` here: ``` quantile(1:100) %>% tibble::enframe() ``` ``` ## # A tibble: 5 x 2 ## name value ## <chr> <dbl> ## 1 0% 1 ## 2 25% 25.8 ## 3 50% 50.5 ## 4 75% 75.2 ## 5 100% 100 ``` ### `invoke_map` example (book) I liked Hadley’s example with invoke\_map and wanted to save it: ``` sim <- tribble( ~f, ~params, "runif", list(min = -1, max = -1), "rnorm", list(sd = 5), "rpois", list(lambda = 10) ) sim %>% mutate(sims = invoke_map(f, params, n = 10)) ``` ``` ## # A tibble: 3 x 3 ## f params sims ## <chr> <list> <list> ## 1 runif <list [2]> <dbl [10]> ## 2 rnorm <list [1]> <dbl [10]> ## 3 rpois <list [1]> <int [10]> ``` ### named list example (book) I liked Hadley’s example where you have a list of named vectors that you need to iterate over both the values as well as the names and the use of enframe to facilitate this. Below is the copied example and notes: ``` x <- list( a = 1:5, b = 3:4, c = 5:6 ) df <- enframe(x) df ``` ``` ## # A tibble: 3 x 2 ## name value ## <chr> <list> ## 1 a <int [5]> ## 2 b <int [2]> ## 3 c <int [2]> ``` The advantage of this structure is that it generalises in a straightforward way \- names are useful if you have character vector of metadata, but don’t help if you have other types of data, or multiple vectors. Now if you want to iterate over names and values in parallel, you can use `map2()`: ``` df %>% mutate( smry = map2_chr(name, value, ~ stringr::str_c(.x, ": ", .y[1])) ) ``` ``` ## # A tibble: 3 x 3 ## name value smry ## <chr> <list> <chr> ## 1 a <int [5]> a: 1 ## 2 b <int [2]> b: 3 ## 3 c <int [2]> c: 5 ``` *Make sure the following packages are installed:* 25\.2: gapminder ---------------- The set\-up example Hadley goes through is important, below is a slightly altered copy of his example. **Nested Data** ``` by_country <- gapminder::gapminder %>% group_by(country, continent) %>% nest() ``` **List\-columns** ``` country_model <- function(df) { lm(lifeExp ~ year, data = df) } ``` Want to apply this function over every data frame, the dataframes are in a list, so do this by: ``` by_country2 <- by_country %>% mutate(model = purrr::map(data, country_model)) ``` Advantage with keeping things in the dataframe is that when you filter, or move things around, everything stays in sync, as do new summary values you might add. ``` by_country2 %>% arrange(continent, country) ``` ``` ## # A tibble: 142 x 4 ## country continent data model ## <fct> <fct> <list> <list> ## 1 Algeria Africa <tibble [12 x 4]> <S3: lm> ## 2 Angola Africa <tibble [12 x 4]> <S3: lm> ## 3 Benin Africa <tibble [12 x 4]> <S3: lm> ## 4 Botswana Africa <tibble [12 x 4]> <S3: lm> ## 5 Burkina Faso Africa <tibble [12 x 4]> <S3: lm> ## 6 Burundi Africa <tibble [12 x 4]> <S3: lm> ## 7 Cameroon Africa <tibble [12 x 4]> <S3: lm> ## 8 Central African Republic Africa <tibble [12 x 4]> <S3: lm> ## 9 Chad Africa <tibble [12 x 4]> <S3: lm> ## 10 Comoros Africa <tibble [12 x 4]> <S3: lm> ## # ... 
with 132 more rows ``` ``` by_country2 %>% mutate(summaries = purrr::map(model, summary)) %>% mutate(r_squared = purrr::map2_dbl(model, data, rsquare)) ``` ``` ## # A tibble: 142 x 6 ## country continent data model summaries r_squared ## <fct> <fct> <list> <list> <list> <dbl> ## 1 Afghanistan Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.948 ## 2 Albania Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.911 ## 3 Algeria Africa <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.985 ## 4 Angola Africa <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.888 ## 5 Argentina Americas <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.996 ## 6 Australia Oceania <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.980 ## 7 Austria Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.992 ## 8 Bahrain Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.967 ## 9 Bangladesh Asia <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.989 ## 10 Belgium Europe <tibble [12 x 4~ <S3: l~ <S3: summary.l~ 0.995 ## # ... with 132 more rows ``` **unnesting**, another dataframe with the residuals included and then unnest ``` by_country3 <- by_country2 %>% mutate(resids = purrr::map2(data, model, add_residuals)) ``` ``` resids <- by_country3 %>% unnest(resids) resids ``` ``` ## # A tibble: 1,704 x 7 ## country continent year lifeExp pop gdpPercap resid ## <fct> <fct> <int> <dbl> <int> <dbl> <dbl> ## 1 Afghanistan Asia 1952 28.8 8425333 779. -1.11 ## 2 Afghanistan Asia 1957 30.3 9240934 821. -0.952 ## 3 Afghanistan Asia 1962 32.0 10267083 853. -0.664 ## 4 Afghanistan Asia 1967 34.0 11537966 836. -0.0172 ## 5 Afghanistan Asia 1972 36.1 13079460 740. 0.674 ## 6 Afghanistan Asia 1977 38.4 14880372 786. 1.65 ## 7 Afghanistan Asia 1982 39.9 12881816 978. 1.69 ## 8 Afghanistan Asia 1987 40.8 13867957 852. 1.28 ## 9 Afghanistan Asia 1992 41.7 16317921 649. 0.754 ## 10 Afghanistan Asia 1997 41.8 22227415 635. -0.534 ## # ... with 1,694 more rows ``` ### 25\.2\.5 1. A linear trend seems to be slightly too simple for the overall trend. Can you do better with a quadratic polynomial? How can you interpret the coefficients of the quadratic? (Hint you might want to transform `year` so that it has mean zero.) *Create functions* ``` # funciton to center value center_value <- function(df){ df %>% mutate(year_cent = year - mean(year)) } # this function allows me to input any text to "var" to customize the inputs # to the model, default are a linear and quadratic term for year (centered) lm_quad_2 <- function(df, var = "year_cent + I(year_cent^2)"){ lm(as.formula(paste("lifeExp ~ ", var)), data = df) } ``` *Create dataframe with evaluation metrics* ``` by_country3_quad <- by_country3 %>% mutate( # create centered data data_cent = purrr::map(data, center_value), # create quadratic models mod_quad = purrr::map(data_cent, lm_quad_2), # get model evaluation stats from original model glance_mod = purrr::map(model, broom::glance), # get model evaluation stats from quadratic model glance_quad = purrr::map(mod_quad, broom::glance)) ``` *Create plots* ``` by_country3_quad %>% unnest(glance_mod, glance_quad, .sep = "_", .drop = TRUE) %>% gather(glance_mod_r.squared, glance_quad_r.squared, key = "order", value = "r.squared") %>% ggplot(aes(x = continent, y = r.squared, colour = continent)) + geom_boxplot() + facet_wrap(~order) ``` * The quadratic trend seems to do better –\> indicated by the distribution of the R^2 values being closer to one. 
The level of improvement seems especially pronounced for African countries.Let’s check this closer by looking at percentage point improvement in R^2 in chart below ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% mutate(bad_fit = glance_mod_r.squared < 0.25, R.squ_ppt_increase = r.squared - glance_mod_r.squared) %>% ggplot(aes(x = continent, y = R.squ_ppt_increase))+ # geom_quasirandom(aes(alpha = bad_fit), colour = "black")+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ labs(title = "Percentage point (PPT) improvement in R squared value", subtitle = "(When adding a quadratic term to the linear regression model)") ``` *View predictions from linear model with quadratic term* (of countries where linear trend did not capture relationship) ``` bad_fit <- by_country3 %>% mutate(glance = purrr::map(model, broom::glance)) %>% unnest(glance, .drop = TRUE) %>% filter(r.squared < 0.25) #solve with join with bad_fit by_country3_quad %>% semi_join(bad_fit, by = "country") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` * while the quadratic model does a better job fitting the model than a linear term does, I wouldn’t say it does a good job of fitting the model * it looks like the trends are generally consistent rates of improvement and then there is a sudden drop\-off associated with some event, hence an intervention variable may be a more appropriate method for modeling this pattern*Quadratic model parameters* ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% unnest(quad_coefs) %>% mutate(bad_fit = glance_mod_r.squared < 0.25) %>% ggplot(aes(x = continent, y = estimate, alpha = bad_fit))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ facet_wrap(~term, scales = "free")+ labs(caption = "Note that 'bad fit' represents a bad fit on the initial model \nthat did not contain a quadratic term)")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` ``` ## Warning: Using alpha for a discrete variable is not advised. ``` * The quadratic term (in a linear function, trained with the x\-value centered at the mean, as in this dataset) has a few important notes related to interpretation + If the coefficient is positive the output will be convex, if it is negative it will be concave (i.e. smile vs. frown shape) + The value on the coefficient represents 1/2 the rate at which the relationship between `lifeExp` and `year` is changing for every one unit change from the mean / expected value of `lifeExp` in the dataset. + Hence if the coefficient is near 0, that means the relationship between `lifeExp` and `year` does not change (or at least does not change at a constant rate) when moving in either direction from `lifeExp`s mean value.To better understand this, let’s look look at a specific example. Excluding Rwanda, Botswana was the `country` that the linear model without the quadratic term performed the worst on. We’ll use this as our example for interpreting the coefficients. 
*Plots of predicted and actual values for Botswanian life expectancy by year* ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp))+ geom_line(aes(y = pred, colour = "prediction"))+ labs(title = "Data and quadratic trend of predictions for Botswana") ``` *(note that the centered value for year in the ‘centered’ dataset is 1979\.5\)* In the model for Botswana, coefficents are: Intercept: \~ 59\.81 year (centered): \~ 0\.0607 year (centered)^2: \~ \-0\.0175 Hence for every one year we move away from the central year (1979\.5\), the rate of change between year and price decreases by *\~0\.035*. Below I show this graphically by plotting the lines tangent to the models output. ``` botswana_coefs <- by_country3_quad %>% filter(country == "Botswana") %>% with(map(mod_quad, coef)) %>% flatten_dbl() ``` Helper functions to find tangent points ``` find_slope <- function(x){ 2*botswana_coefs[[3]]*x + botswana_coefs[[2]] } find_y1 <- function(x){ botswana_coefs[[3]]*(x^2) + botswana_coefs[[2]]*x + botswana_coefs[[1]] } find_intercept <- function(x, y, m){ y - x*m } tangent_lines <- tibble(x1 = seq(-20, 20, 10)) %>% mutate(slope = find_slope(x1), y1 = find_y1(x1), intercept = find_intercept(x1, y1, slope), slope_change = x1*2*botswana_coefs[[3]]) %>% select(slope, intercept, everything()) ``` ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year_cent))+ geom_line(aes(x = year_cent, y = pred), colour = "red")+ geom_abline(aes(intercept = intercept, slope = slope), data = tangent_lines)+ coord_fixed() ``` Below is the relevant output in a table. `x1`: represents the change in x value from 1979\.5 `slope`: slope of the tangent line at particular `x1` value `slope_diff_central`: the amount the slope is different from the slope of the tangent line at the central year ``` select(tangent_lines, x1, slope, slope_diff_central = slope_change) ``` ``` ## # A tibble: 5 x 3 ## x1 slope slope_diff_central ## <dbl> <dbl> <dbl> ## 1 -20 0.760 0.700 ## 2 -10 0.411 0.350 ## 3 0 0.0607 0 ## 4 10 -0.289 -0.350 ## 5 20 -0.639 -0.700 ``` * notice that for every 10 year increase in `x1` we see the slope of the tangent line has decreased by 0\.35\. If we’d looked at just one year we would have seen the change was 0\.035, this correspondig with 2 multiplied by the coefficient on the quadratic term of our model. 2. Explore other methods for visualising the distribution of \\(R^2\\) per continent. You might want to try the ggbeeswarm package, which provides similar methods for avoiding overlaps as jitter, but uses deterministic methods. *visualisations of linear model* ``` by_country3_quad %>% unnest(glance_mod) %>% ggplot(aes(x = continent, y = r.squared, colour = continent))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ ggbeeswarm::geom_quasirandom() ``` * I like `geom_quasirandom()` the best as an overlay on boxplot, it keeps things centered and doesn’t have the gravitational pull affect that makes `geom_beeswarm()` become a little misaligned, it also works well here over `geom_jitter()` as the points stay better around their true value 3. 
To create the last plot (showing the data for the countries with the worst model fits), we needed two steps: we created a data frame with one row per country and then semi\-joined it to the original dataset. It’s possible to avoid this join if we use `unnest()` instead of `unnest(.drop = TRUE)`. How? ``` #first filter by r.squared and then unnest by_country3_quad %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(glance_mod) %>% mutate(bad_fit = r.squared < 0.25) %>% filter(bad_fit) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` ### 25\.2\.5 1. A linear trend seems to be slightly too simple for the overall trend. Can you do better with a quadratic polynomial? How can you interpret the coefficients of the quadratic? (Hint you might want to transform `year` so that it has mean zero.) *Create functions* ``` # funciton to center value center_value <- function(df){ df %>% mutate(year_cent = year - mean(year)) } # this function allows me to input any text to "var" to customize the inputs # to the model, default are a linear and quadratic term for year (centered) lm_quad_2 <- function(df, var = "year_cent + I(year_cent^2)"){ lm(as.formula(paste("lifeExp ~ ", var)), data = df) } ``` *Create dataframe with evaluation metrics* ``` by_country3_quad <- by_country3 %>% mutate( # create centered data data_cent = purrr::map(data, center_value), # create quadratic models mod_quad = purrr::map(data_cent, lm_quad_2), # get model evaluation stats from original model glance_mod = purrr::map(model, broom::glance), # get model evaluation stats from quadratic model glance_quad = purrr::map(mod_quad, broom::glance)) ``` *Create plots* ``` by_country3_quad %>% unnest(glance_mod, glance_quad, .sep = "_", .drop = TRUE) %>% gather(glance_mod_r.squared, glance_quad_r.squared, key = "order", value = "r.squared") %>% ggplot(aes(x = continent, y = r.squared, colour = continent)) + geom_boxplot() + facet_wrap(~order) ``` * The quadratic trend seems to do better –\> indicated by the distribution of the R^2 values being closer to one. 
The level of improvement seems especially pronounced for African countries.Let’s check this closer by looking at percentage point improvement in R^2 in chart below ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% mutate(bad_fit = glance_mod_r.squared < 0.25, R.squ_ppt_increase = r.squared - glance_mod_r.squared) %>% ggplot(aes(x = continent, y = R.squ_ppt_increase))+ # geom_quasirandom(aes(alpha = bad_fit), colour = "black")+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ labs(title = "Percentage point (PPT) improvement in R squared value", subtitle = "(When adding a quadratic term to the linear regression model)") ``` *View predictions from linear model with quadratic term* (of countries where linear trend did not capture relationship) ``` bad_fit <- by_country3 %>% mutate(glance = purrr::map(model, broom::glance)) %>% unnest(glance, .drop = TRUE) %>% filter(r.squared < 0.25) #solve with join with bad_fit by_country3_quad %>% semi_join(bad_fit, by = "country") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp, colour = country))+ geom_line(aes(y = pred, colour = country))+ facet_wrap(~country)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` * while the quadratic model does a better job fitting the model than a linear term does, I wouldn’t say it does a good job of fitting the model * it looks like the trends are generally consistent rates of improvement and then there is a sudden drop\-off associated with some event, hence an intervention variable may be a more appropriate method for modeling this pattern*Quadratic model parameters* ``` by_country3_quad %>% mutate(quad_coefs = map(mod_quad, broom::tidy)) %>% unnest(glance_mod, .sep = "_") %>% unnest(glance_quad) %>% unnest(quad_coefs) %>% mutate(bad_fit = glance_mod_r.squared < 0.25) %>% ggplot(aes(x = continent, y = estimate, alpha = bad_fit))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ geom_quasirandom(aes(colour = continent))+ facet_wrap(~term, scales = "free")+ labs(caption = "Note that 'bad fit' represents a bad fit on the initial model \nthat did not contain a quadratic term)")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) ``` ``` ## Warning: Using alpha for a discrete variable is not advised. ``` * The quadratic term (in a linear function, trained with the x\-value centered at the mean, as in this dataset) has a few important notes related to interpretation + If the coefficient is positive the output will be convex, if it is negative it will be concave (i.e. smile vs. frown shape) + The value on the coefficient represents 1/2 the rate at which the relationship between `lifeExp` and `year` is changing for every one unit change from the mean / expected value of `lifeExp` in the dataset. + Hence if the coefficient is near 0, that means the relationship between `lifeExp` and `year` does not change (or at least does not change at a constant rate) when moving in either direction from `lifeExp`s mean value.To better understand this, let’s look look at a specific example. Excluding Rwanda, Botswana was the `country` that the linear model without the quadratic term performed the worst on. We’ll use this as our example for interpreting the coefficients. 
*Plots of predicted and actual values for Botswanian life expectancy by year* ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year, group = country))+ geom_point(aes(y = lifeExp))+ geom_line(aes(y = pred, colour = "prediction"))+ labs(title = "Data and quadratic trend of predictions for Botswana") ``` *(note that the centered value for year in the ‘centered’ dataset is 1979\.5\)* In the model for Botswana, coefficents are: Intercept: \~ 59\.81 year (centered): \~ 0\.0607 year (centered)^2: \~ \-0\.0175 Hence for every one year we move away from the central year (1979\.5\), the rate of change between year and price decreases by *\~0\.035*. Below I show this graphically by plotting the lines tangent to the models output. ``` botswana_coefs <- by_country3_quad %>% filter(country == "Botswana") %>% with(map(mod_quad, coef)) %>% flatten_dbl() ``` Helper functions to find tangent points ``` find_slope <- function(x){ 2*botswana_coefs[[3]]*x + botswana_coefs[[2]] } find_y1 <- function(x){ botswana_coefs[[3]]*(x^2) + botswana_coefs[[2]]*x + botswana_coefs[[1]] } find_intercept <- function(x, y, m){ y - x*m } tangent_lines <- tibble(x1 = seq(-20, 20, 10)) %>% mutate(slope = find_slope(x1), y1 = find_y1(x1), intercept = find_intercept(x1, y1, slope), slope_change = x1*2*botswana_coefs[[3]]) %>% select(slope, intercept, everything()) ``` ``` by_country3_quad %>% filter(country == "Botswana") %>% mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>% unnest(data_preds) %>% ggplot(aes(x = year_cent))+ geom_line(aes(x = year_cent, y = pred), colour = "red")+ geom_abline(aes(intercept = intercept, slope = slope), data = tangent_lines)+ coord_fixed() ``` Below is the relevant output in a table. `x1`: represents the change in x value from 1979\.5 `slope`: slope of the tangent line at particular `x1` value `slope_diff_central`: the amount the slope is different from the slope of the tangent line at the central year ``` select(tangent_lines, x1, slope, slope_diff_central = slope_change) ``` ``` ## # A tibble: 5 x 3 ## x1 slope slope_diff_central ## <dbl> <dbl> <dbl> ## 1 -20 0.760 0.700 ## 2 -10 0.411 0.350 ## 3 0 0.0607 0 ## 4 10 -0.289 -0.350 ## 5 20 -0.639 -0.700 ``` * notice that for every 10 year increase in `x1` we see the slope of the tangent line has decreased by 0\.35\. If we’d looked at just one year we would have seen the change was 0\.035, this correspondig with 2 multiplied by the coefficient on the quadratic term of our model. 2. Explore other methods for visualising the distribution of \\(R^2\\) per continent. You might want to try the ggbeeswarm package, which provides similar methods for avoiding overlaps as jitter, but uses deterministic methods. *visualisations of linear model* ``` by_country3_quad %>% unnest(glance_mod) %>% ggplot(aes(x = continent, y = r.squared, colour = continent))+ geom_boxplot(alpha = 0.1, colour = "dark grey")+ ggbeeswarm::geom_quasirandom() ``` * I like `geom_quasirandom()` the best as an overlay on boxplot, it keeps things centered and doesn’t have the gravitational pull affect that makes `geom_beeswarm()` become a little misaligned, it also works well here over `geom_jitter()` as the points stay better around their true value 3. 
3. To create the last plot (showing the data for the countries with the worst model fits), we needed two steps: we created a data frame with one row per country and then semi\-joined it to the original dataset. It's possible to avoid this join if we use `unnest()` instead of `unnest(.drop = TRUE)`. How?

```
#first filter by r.squared and then unnest
by_country3_quad %>%
  mutate(data_preds = purrr::map2(data_cent, mod_quad, add_predictions)) %>%
  unnest(glance_mod) %>%
  mutate(bad_fit = r.squared < 0.25) %>%
  filter(bad_fit) %>%
  unnest(data_preds) %>%
  ggplot(aes(x = year, group = country))+
  geom_point(aes(y = lifeExp, colour = country))+
  geom_line(aes(y = pred, colour = country))+
  facet_wrap(~country)+
  theme(axis.text.x = element_text(angle = 90, hjust = 1))
```

25\.4: Creating list\-columns
-----------------------------

### 25\.4\.5

1. List all the functions that you can think of that take an atomic vector and return a list.

* `stringr::str_extract_all` \+ other `stringr` functions (however the below can also take types that are not atomic and are probably not really what is being looked for)
* `list`
* `tibble`
* `map` / `lapply`

2. Brainstorm useful summary functions that, like `quantile()`, return multiple values.

* `summary`
* `range`
* …

3. What's missing in the following data frame? How does `quantile()` return that missing piece? Why isn't that helpful here?

```
mtcars %>%
  group_by(cyl) %>%
  summarise(q = list(quantile(mpg))) %>%
  unnest()
```

```
## # A tibble: 15 x 2
##      cyl     q
##    <dbl> <dbl>
##  1     4  21.4
##  2     4  22.8
##  3     4  26  
##  4     4  30.4
##  5     4  33.9
##  6     6  17.8
##  7     6  18.6
##  8     6  19.7
##  9     6  21  
## 10     6  21.4
## 11     8  10.4
## 12     8  14.4
## 13     8  15.2
## 14     8  16.2
## 15     8  19.2
```

* need to capture the probabilities of the quantiles to make this useful…

```
probs <- c(0.01, 0.25, 0.5, 0.75, 0.99)

mtcars %>%
  group_by(cyl) %>%
  summarise(p = list(probs), q = list(quantile(mpg, probs))) %>%
  unnest()
```

```
## # A tibble: 15 x 3
##      cyl     p     q
##    <dbl> <dbl> <dbl>
##  1     4  0.01  21.4
##  2     4  0.25  22.8
##  3     4  0.5   26  
##  4     4  0.75  30.4
##  5     4  0.99  33.8
##  6     6  0.01  17.8
##  7     6  0.25  18.6
##  8     6  0.5   19.7
##  9     6  0.75  21  
## 10     6  0.99  21.4
## 11     8  0.01  10.4
## 12     8  0.25  14.4
## 13     8  0.5   15.2
## 14     8  0.75  16.2
## 15     8  0.99  19.1
```

* see [list(quantile()) examples](25-many-models.html#listquantile-examples) for a related method that captures the names of the quantiles (rather than requiring the user to manually input a vector of probabilities)

4. What does this code do? Why might it be useful?

```
mtcars %>%
  select(1:3) %>%
  group_by(cyl) %>%
  summarise_all(funs(list))
```

* It collapses each column (within each `cyl` group) into a list\-column whose elements are atomic vectors. It is different from `nest` in that each column becomes its own list\-column of atomic vectors; if `nest` had been used, all the values for a group would have been placed in a single dataframe. Could be useful for running purrr through particular columns…
* e.g. let's say we want to find the number of unique items in each column for each grouping, we could do that like so

```
mtcars %>%
  group_by(cyl) %>%
  select(1:5) %>%
  summarise_all(funs(list)) %>%
  mutate_all(funs(unique = map_int(., ~length(unique(.x)))))
```

```
## Warning: funs() is soft deprecated as of dplyr 0.8.0
## please use list() instead
## 
## # Before:
## funs(name = f(.)
## 
## # After: 
## list(name = ~f(.))
## This warning is displayed once per session.
```
```
## # A tibble: 3 x 10
##     cyl mpg   disp  hp    drat  cyl_unique mpg_unique disp_unique hp_unique
##   <dbl> <lis> <lis> <lis> <lis>      <int>      <int>       <int>     <int>
## 1     4 <dbl~ <dbl~ <dbl~ <dbl~          1          9          11        10
## 2     6 <dbl~ <dbl~ <dbl~ <dbl~          1          6           5         4
## 3     8 <dbl~ <dbl~ <dbl~ <dbl~          1         12          11         9
## # ... with 1 more variable: drat_unique <int>
```

```
# we could also simply overwrite the values (rather than make new columns)
mtcars %>%
  group_by(cyl) %>%
  select(1:5) %>%
  summarise_all(funs(list)) %>%
  mutate_all(funs(map_int(., ~length(unique(.x)))))
```

```
## # A tibble: 3 x 5
##     cyl   mpg  disp    hp  drat
##   <int> <int> <int> <int> <int>
## 1     1     9    11    10    10
## 2     1     6     5     4     5
## 3     1    12    11     9    11
```
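Since `funs()` is soft\-deprecated, the same pattern can also be written with `across()`. This is a sketch assuming dplyr 1\.0 or later (more recent than the dplyr 0\.8 used above) and purrr are loaded:

```
# equivalent of the summarise_all()/mutate_all() + funs() idiom above:
# collapse each column into a list-column per cyl group, then count the
# distinct values in each list element with n_distinct()
mtcars %>%
  select(1:5) %>%
  group_by(cyl) %>%
  summarise(across(everything(), ~ list(.x))) %>%
  mutate(across(everything(), ~ map_int(.x, n_distinct)))
```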
25\.5: Simplifying list\-columns
--------------------------------

### 25\.5\.3

1. Why might the `lengths()` function be useful for creating atomic vector columns from list\-columns?

* perhaps you want to measure the number of elements (or unique elements) in an individual element of a list column

```
mpg %>%
  group_by(cyl) %>%
  summarise(displ_list = list(displ)) %>%
  mutate(num_unique = map_int(displ_list, ~unique(.x) %>% length()))
```

```
## # A tibble: 4 x 3
##     cyl displ_list num_unique
##   <int> <list>          <int>
## 1     4 <dbl [81]>          8
## 2     5 <dbl [4]>           1
## 3     6 <dbl [79]>         14
## 4     8 <dbl [70]>         17
```

2. List the most common types of vector found in a data frame. What makes lists different?

* the atomic types: char, int, double, factor, date are all more common, they are atomic, whereas lists are not atomic vectors and can contain any type of data within them (e.g. a list of atomic vectors, list of lists, etc.).

Appendix
--------

### Models in lists

This is the more traditional way you might store models in a list

```
models_countries <- purrr::map(by_country$data, country_model)

names(models_countries) <- by_country$country

models_countries[1:3]
```

```
## $Afghanistan
## 
## Call:
## lm(formula = lifeExp ~ year, data = df)
## 
## Coefficients:
## (Intercept)         year  
##   -507.5343       0.2753  
## 
## 
## $Albania
## 
## Call:
## lm(formula = lifeExp ~ year, data = df)
## 
## Coefficients:
## (Intercept)         year  
##   -594.0725       0.3347  
## 
## 
## $Algeria
## 
## Call:
## lm(formula = lifeExp ~ year, data = df)
## 
## Coefficients:
## (Intercept)         year  
##  -1067.8590       0.5693
```

### List\-columns for sampling

say you want to sample all the flights on 50 days out of the year.
List\-cols can be used to generate a sample like this:

```
flights %>%
  mutate(create_date = make_date(year, month, day)) %>%
  select(create_date, 5:8) %>%
  group_by(create_date) %>%
  nest() %>%
  sample_n(50) %>%
  unnest()
```

```
## # A tibble: 45,640 x 5
##    create_date sched_dep_time dep_delay arr_time sched_arr_time
##    <date>               <int>     <dbl>    <int>          <int>
##  1 2013-02-09             900         1     1242           1227
##  2 2013-02-09            1130        30     1434           1430
##  3 2013-02-09             900       186     1814           1540
##  4 2013-02-09            1220         2     1545           1532
##  5 2013-02-09            1240        -3     1414           1444
##  6 2013-02-09            1245        -5     1528           1600
##  7 2013-02-09            1250         0     1526           1550
##  8 2013-02-09            1259        -4     1535           1555
##  9 2013-02-09            1300        -2     1540           1605
## 10 2013-02-09            1300         3     1626           1608
## # ... with 45,630 more rows
```

Alternatively you could use a `semi_join()`, e.g.

```
flights_samp <- flights %>%
  mutate(create_date = make_date(year, month, day)) %>%
  distinct(create_date) %>%
  sample_n(50)

flights %>%
  mutate(create_date = make_date(year, month, day)) %>%
  select(create_date, 5:8) %>%
  semi_join(flights_samp, by = "create_date")
```

```
## # A tibble: 46,640 x 5
##    create_date sched_dep_time dep_delay arr_time sched_arr_time
##    <date>               <int>     <dbl>    <int>          <int>
##  1 2013-01-15             500        -7      645            648
##  2 2013-01-15             525        -7      825            820
##  3 2013-01-15             530         3      839            831
##  4 2013-01-15             540        -6      829            850
##  5 2013-01-15             540        -5     1014           1017
##  6 2013-01-15             600       -17      710            715
##  7 2013-01-15             600       -11      637            709
##  8 2013-01-15             600        -8      934            910
##  9 2013-01-15             600        -8      658            658
## 10 2013-01-15             600        -7      851            859
## # ... with 46,630 more rows
```

* In some situations I find the `nest`, `unnest` method more elegant, though the `semi_join` method seems to run faster on large dataframes
* There are also other more specialized functions in the tidyverse to help with various sampling strategies

### 25\.2\.5\.1

#### Include cubic term

Let's look at this example if we had allowed year to be a 3rd order polynomial. We're really stretching our degrees of freedom (in relation to our number of observations) in this case – these might be less likely to generalize to other data well.

```
by_country3 %>%
  semi_join(bad_fit, by = "country") %>%
  mutate(
    # create centered data
    data_cent = purrr::map(data, center_value),
    # create cubic (3rd order) data
    mod_cubic = purrr::map(data_cent, lm_quad_2, var = "year_cent + I(year_cent^2) + I(year_cent^3)"),
    # get predictions for 3rd order model
    data_cubic = purrr::map2(data_cent, mod_cubic, add_predictions)) %>%
  unnest(data_cubic) %>%
  ggplot(aes(x = year, group = country))+
  geom_point(aes(y = lifeExp, colour = country))+
  geom_line(aes(y = pred, colour = country))+
  facet_wrap(~country)+
  theme(axis.text.x = element_text(angle = 90, hjust = 1))
```

* interpretability of coefficients beyond the quadratic term becomes less straightforward to explain

### Multiple graphs in chunk

There are a variety of ways to have multiple graphs outputted and aligned side by side:

* build graphs separately and use `gridExtra::grid.arrange()`
* Ensure metrics have been gathered into a single column and then use `facet_wrap()`/`facet_grid()` (`ggforce` is a helpful extension package to ggplot2 that gives more functionality to these faceting functions)
* manipulate chunk options, e.g.
figures below have the following options set in the R code chunk: `out.width = "33%", fig.asp = 1, fig.width = 3, fig.show='hold',, fig.align='default'` ``` nz <- filter(gapminder, country == "New Zealand") nz %>% ggplot(aes(year, lifeExp)) + geom_line() + ggtitle("Full data = ") nz_mod <- lm(lifeExp ~ year, data = nz) nz %>% add_predictions(nz_mod) %>% ggplot(aes(year, pred)) + geom_line() + ggtitle("Linear trend + ") nz %>% add_residuals(nz_mod) %>% ggplot(aes(year, resid)) + geom_hline(yintercept = 0, colour = "white", size = 3) + geom_line() + ggtitle("Remaining pattern") ``` ### list(quantile()) examples Some of these examples may not represent best practices. ``` prob_vals <- c(0, .25, .5, .75, 1) iris %>% group_by(Species) %>% summarise(Petal.Length_q = list(quantile(Petal.Length))) %>% mutate(probs = list(prob_vals)) %>% unnest() ``` ``` ## # A tibble: 15 x 3 ## Species Petal.Length_q probs ## <fct> <dbl> <dbl> ## 1 setosa 1 0 ## 2 setosa 1.4 0.25 ## 3 setosa 1.5 0.5 ## 4 setosa 1.58 0.75 ## 5 setosa 1.9 1 ## 6 versicolor 3 0 ## 7 versicolor 4 0.25 ## 8 versicolor 4.35 0.5 ## 9 versicolor 4.6 0.75 ## 10 versicolor 5.1 1 ## 11 virginica 4.5 0 ## 12 virginica 5.1 0.25 ## 13 virginica 5.55 0.5 ## 14 virginica 5.88 0.75 ## 15 virginica 6.9 1 ``` *Example for using quantile across range of columns* *Also notice dynamic method for extracting names* ``` iris %>% group_by(Species) %>% summarise_all(funs(list(quantile(., probs = prob_vals)))) %>% mutate(probs = map(Petal.Length, names)) %>% unnest() ``` ``` ## # A tibble: 15 x 6 ## Species Sepal.Length Sepal.Width Petal.Length Petal.Width probs ## <fct> <dbl> <dbl> <dbl> <dbl> <chr> ## 1 setosa 4.3 2.3 1 0.1 0% ## 2 setosa 4.8 3.2 1.4 0.2 25% ## 3 setosa 5 3.4 1.5 0.2 50% ## 4 setosa 5.2 3.68 1.58 0.3 75% ## 5 setosa 5.8 4.4 1.9 0.6 100% ## 6 versicolor 4.9 2 3 1 0% ## 7 versicolor 5.6 2.52 4 1.2 25% ## 8 versicolor 5.9 2.8 4.35 1.3 50% ## 9 versicolor 6.3 3 4.6 1.5 75% ## 10 versicolor 7 3.4 5.1 1.8 100% ## 11 virginica 4.9 2.2 4.5 1.4 0% ## 12 virginica 6.22 2.8 5.1 1.8 25% ## 13 virginica 6.5 3 5.55 2 50% ## 14 virginica 6.9 3.18 5.88 2.3 75% ## 15 virginica 7.9 3.8 6.9 2.5 100% ``` ### Extracting names Maybe not best practice: ``` quantile(1:100) %>% as.data.frame() %>% rownames_to_column() ``` ``` ## rowname . ## 1 0% 1.00 ## 2 25% 25.75 ## 3 50% 50.50 ## 4 75% 75.25 ## 5 100% 100.00 ``` Better would be to use `enframe()` here: ``` quantile(1:100) %>% tibble::enframe() ``` ``` ## # A tibble: 5 x 2 ## name value ## <chr> <dbl> ## 1 0% 1 ## 2 25% 25.8 ## 3 50% 50.5 ## 4 75% 75.2 ## 5 100% 100 ``` ### `invoke_map` example (book) I liked Hadley’s example with invoke\_map and wanted to save it: ``` sim <- tribble( ~f, ~params, "runif", list(min = -1, max = -1), "rnorm", list(sd = 5), "rpois", list(lambda = 10) ) sim %>% mutate(sims = invoke_map(f, params, n = 10)) ``` ``` ## # A tibble: 3 x 3 ## f params sims ## <chr> <list> <list> ## 1 runif <list [2]> <dbl [10]> ## 2 rnorm <list [1]> <dbl [10]> ## 3 rpois <list [1]> <int [10]> ``` ### named list example (book) I liked Hadley’s example where you have a list of named vectors that you need to iterate over both the values as well as the names and the use of enframe to facilitate this. 
Below is the copied example and notes:

```
x <- list(
  a = 1:5,
  b = 3:4,
  c = 5:6
)

df <- enframe(x)

df
```

```
## # A tibble: 3 x 2
##   name  value    
##   <chr> <list>   
## 1 a     <int [5]>
## 2 b     <int [2]>
## 3 c     <int [2]>
```

The advantage of this structure is that it generalises in a straightforward way \- names are useful if you have character vector of metadata, but don't help if you have other types of data, or multiple vectors.

Now if you want to iterate over names and values in parallel, you can use `map2()`:

```
df %>%
  mutate(
    smry = map2_chr(name, value, ~ stringr::str_c(.x, ": ", .y[1]))
  )
```

```
## # A tibble: 3 x 3
##   name  value     smry 
##   <chr> <list>    <chr>
## 1 a     <int [5]> a: 1 
## 2 b     <int [2]> b: 3 
## 3 c     <int [2]> c: 5
```
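A related option (not part of the book's copied example) is `purrr::imap()`, which iterates over elements and their names directly, without needing `enframe()` first. A small sketch on the same `x`:

```
# imap() passes each element as .x and its name as .y, so this mirrors
# the map2_chr() call above but works straight off the named list
purrr::imap_chr(x, ~ stringr::str_c(.y, ": ", .x[1]))
```

This returns a named character vector rather than adding a column to a tibble, so it is a lighter\-weight option when you don't need the data frame structure.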
Ch. 27: R Markdown
==================

**Functions and notes:**

* shortcut for inserting a code chunk is cmd/ctrl\+alt\+i
* shortcut for running entire code chunks: cmd/ctrl\+shift\+enter
* chunk options
    + chunk name is the first part after the type of code in the chunk, e.g. code chunk by name: `"```{r by-name}"`
    + `eval = FALSE` show example output code, but don't evaluate
    + `include = FALSE` evaluate code but don't show code or output
    + `echo = FALSE` is for when you just want the output but not the code itself
    + `message = FALSE` or `warning = FALSE` prevents messages or warnings appearing in the finished file
    + `error = TRUE` causes code to render even if there is an error
    + `results = 'hide'` hides printed output and `fig.show = 'hide'` hides plots - allows you to hide particular bits of output
    + `cache = TRUE` save output of chunk to separate folder (speeds\-up rendering)
    + `dependson = "chunk_name"` update chunk if dependency changes
    + `cache.extra` if output from function changes, will re\-render – useful if you only want to update when, for example, a file changes, e.g.

```
rawdata <- readr::read_csv("a_very_large_file.csv")
```

* good idea to name code chunks after the main object created
* `knitr::clean_cache` clear out your caches
* `knitr::opts_chunk` use to change knitting options
    + e.g.

```
# when writing books and tutorials
knitr::opts_chunk$set(
  comment = "#>",
  collapse = TRUE
)

# hiding code for report
knitr::opts_chunk$set(
  echo = FALSE
)

# may also set `message = FALSE` and `warning = FALSE`
```

* `rmarkdown::render` programmatically knit documents
    + e.g. `rmarkdown::render("27-r-markdown.Rmd", output_format = "all")` to render all formats in the YAML header
* `knitr::kable` to make a dataframe more visible for printing when knitting
    + also see the `xtable`, `stargazer`, `pander`, `tables`, and `ascii` packages
* `format` helpful when inserting numbers into text, e.g.

```
comma <- function(x) format(x, digits = 2, big.mark = ",")

comma(3452345)
```

```
## [1] "3,452,345"
```

```
comma(.12358124331)
```

```
## [1] "0.12"
```

* Use `params:` in the YAML header to add in specific values or create parameterized reports, e.g.

```
params:
  start: !r lubridate::ymd("2015-01-01")
  snapshot: !r lubridate::ymd_hms("2015-01-01 12:30:00")
```

* Full chunk options here: <https://yihui.name/knitr/options/>

27\.2 R Markdown basics
-----------------------

### 27\.2\.1

1. Create a new notebook using *File \> New File \> R Notebook*. Read the instructions. Practice running the chunks. Verify that you can modify the code, re\-run it, and see modified output.

Done separately.

2. Create a new R Markdown document with *File \> New File \> R Markdown…* Knit it by clicking the appropriate button. Knit it by using the appropriate keyboard short cut. Verify that you can modify the input and see the output update.

Done separately.

3. Compare and contrast the R notebook and R markdown files you created above. How are the outputs similar? How are they different? How are the inputs similar? How are they different? What happens if you copy the YAML header from one to the other?

* Both by default have code chunks display 'in\-line' while working, though with RMD you can force output to not display in\-line.
* When rendering, the default for notebooks is to render whichever chunks have been rendered during the interactive session, whereas an RMD document needs directions from code chunk options
    + I generally prefer .Rmd files to notebooks.[46](#fn46)

4. Create one new R Markdown document for each of the three built\-in formats: HTML, PDF and Word.
Knit each of the three documents. How does the output differ? How does the input differ? (You may need to install LaTeX in order to build the PDF output — RStudio will prompt you if this is necessary.)

Done separately.

HTML does not have page numbers. Plots or other outputs with interactive components will often only be viewable from html (e.g. flexdashboard, plotly, …). Some input options will work across all formats, e.g. `toc: true`, however other options like code folding may be specific to a format, e.g. code folding will only work with html.

27\.3: Text formatting with Markdown
------------------------------------

*Print file from Hadley's github page with common formatting:*

```
cat(readr::read_file("https://raw.githubusercontent.com/hadley/r4ds/master/rmarkdown/markdown.Rmd"))
```

*Other notes*

The following will actually run in the console when knitted (and not in the knitted document):

```
summary(mpg)
```

### 27\.3\.1

1. Practice what you've learned by creating a brief CV. The title should be your name, and you should include headings for (at least) education or employment. Each of the sections should include a bulleted list of jobs/degrees. Highlight the year in bold.

*this is a weak example (see \_\_ for better examples):*

2. Using the R Markdown quick reference, figure out how to:

1. Add a footnote.

Here is a footnote reference[47](#fn47) and another[48](#fn48) and a 3rd[49](#fn49) and an in\-line one[50](#fn50)

2. Add a horizontal rule.

---

A [linked phrase](http://example.com/ "Title").

---

pagebreaks above and below (AKA horizontal rules)

---

3. Add a block quote.

> There is no spoon.
>
> \-The Matrix

3. Copy and paste the contents of `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown> into a local R markdown document. Check that you can run it, then add text after the frequency polygon that describes its most striking features.

* It's interesting that the count of diamonds spikes at whole numbers…

27\.4: Code chunks
------------------

### 27\.4\.7

1. Add a section that explores how diamond sizes vary by cut, colour, and clarity. Assume you're writing a report for someone who doesn't know R, and instead of setting `echo = FALSE` on each chunk, set a global option.

* put this into a code chunk:

```
knitr::opts_chunk$set(echo = FALSE)
```

2. Download `diamond-sizes.Rmd` from <https://github.com/hadley/r4ds/tree/master/rmarkdown>. Add a section that describes the largest 20 diamonds, including a table that displays their most important attributes.
```
diamonds %>%
  filter(min_rank(-carat) <= 20) %>%
  select(starts_with("c")) %>%
  arrange(desc(carat)) %>%
  knitr::kable(caption = "The four C's of the 20 biggest diamonds")
```

Table 1: The four C's of the 20 biggest diamonds

| carat | cut | color | clarity |
| --- | --- | --- | --- |
| 5\.01 | Fair | J | I1 |
| 4\.50 | Fair | J | I1 |
| 4\.13 | Fair | H | I1 |
| 4\.01 | Premium | I | I1 |
| 4\.01 | Premium | J | I1 |
| 4\.00 | Very Good | I | I1 |
| 3\.67 | Premium | I | I1 |
| 3\.65 | Fair | H | I1 |
| 3\.51 | Premium | J | VS2 |
| 3\.50 | Ideal | H | I1 |
| 3\.40 | Fair | D | I1 |
| 3\.24 | Premium | H | I1 |
| 3\.22 | Ideal | I | I1 |
| 3\.11 | Fair | J | I1 |
| 3\.05 | Premium | E | I1 |
| 3\.04 | Very Good | I | SI2 |
| 3\.04 | Premium | I | SI2 |
| 3\.02 | Fair | I | I1 |
| 3\.01 | Premium | I | I1 |
| 3\.01 | Premium | F | I1 |
| 3\.01 | Fair | H | I1 |
| 3\.01 | Premium | G | SI2 |
| 3\.01 | Ideal | J | SI2 |
| 3\.01 | Ideal | J | I1 |
| 3\.01 | Premium | I | SI2 |
| 3\.01 | Fair | I | SI2 |
| 3\.01 | Fair | I | SI2 |
| 3\.01 | Good | I | SI2 |
| 3\.01 | Good | I | SI2 |
| 3\.01 | Good | H | SI2 |
| 3\.01 | Premium | J | SI2 |
| 3\.01 | Premium | J | SI2 |

3. Modify `diamonds-sizes.Rmd` to use `comma()` to produce nicely formatted output. Also include the percentage of diamonds that are larger than 2\.5 carats.

```
diamonds %>%
  summarise(`proportion big` = (sum(carat > 2.5) / n()) %>% comma()) %>%
  knitr::kable()
```

| proportion big |
| --- |
| 0\.0023 |

4. Set up a network of chunks where `d` depends on `c` and `b`, and both `b` and `c` depend on `a`. Have each chunk print `lubridate::now()`, set `cache = TRUE`, then verify your understanding of caching.

```
lubridate::now()
```

```
## [1] "2019-06-05 19:31:49 EDT"
```

```
lubridate::now()
```

```
## [1] "2019-06-05 19:31:49 EDT"
```

```
lubridate::now()
```

```
## [1] "2019-06-05 19:31:50 EDT"
```

```
lubridate::now()
```

```
## [1] "2019-06-05 19:31:50 EDT"
```
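The rendered output above only shows the four `lubridate::now()` calls; the chunk headers that set up the dependency network could look something like the following (a sketch, with illustrative chunk labels `a` through `d`, each chunk containing a single `lubridate::now()` call):

* `{r a, cache = TRUE}`
* `{r b, cache = TRUE, dependson = "a"}`
* `{r c, cache = TRUE, dependson = "a"}`
* `{r d, cache = TRUE, dependson = c("b", "c")}`

Re\-knitting after editing only chunk `a` should invalidate all four caches, while editing chunk `b` alone leaves `a` and `c` cached.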
Ch. 28: Graphics for communication
==================================

**Functions and notes:**

* `labs()` to add labels
    + common args: `title`, `subtitle`, `caption`, `x`, `y`, `colour`, …
    + for mathematical equations use `quote` and see `?plotmath`
        - e.g. within `labs()` could do `y = quote(alpha + beta + frac(delta, theta))`
* `geom_text()` similar to `geom_point()` but with argument `label` that adds text where the point would be
    + use `nudge_x` and `nudge_y` to move position around
    + use `vjust` ('top', 'center', or 'bottom') and `hjust` ('left', 'center', or 'right') to control alignment of text
    + can use `+Inf` and `-Inf` to put text in exact corners
    + use `stringr::str_wrap()` to automatically add line breaks
    + `geom_label()` is like `geom_text()` but draws a box around the data that makes it easier to see (can adjust `alpha` and `fill` of the background box)
    + `ggrepel::geom_label_repel()` is like `geom_label()` but prevents overlap of labels
* `geom_hline()` and `geom_vline` for reference lines (often use `size = 2` and `colour = white`)
* `geom_rect()` to draw a rectangle around points (controlled by `xmin`, `xmax`, `ymin`, `ymax`)
* `geom_segment()` to draw attention to a point with an arrow (common args: `arrow`, `x`, `y`, `xend`, `yend`)
* `annotate` can add in labels by hand (not from values of a dataframe)
* `scale_x_continuous()`, `scale_y_continuous()`, `scale_colour_discrete()`, … `scale_{aes}_{scale type}()`
    + `breaks` and `labels` are key args (can set `labels = NULL` to remove values)
    + `scale_colour_brewer(palette = "Set1")` for color blind people
    + `scale_colour_manual()` for defining colours with specific values, e.g. `scale_colour_manual(values = c(Republican = "red", Democratic = "blue"))`
    + for continuous scales try `scale_colour_gradient()`, `scale_fill_gradient()`, `scale_colour_gradient2()` (two colour gradient, e.g. \+ / \- values), `viridis::scale_colour_viridis()`
    + date scales are a little different, e.g. `scale_x_date()` takes args `date_labels` (e.g. `date_labels = "'%y"`) and `date_breaks` (e.g. `date_breaks = "2 days"`)
    + `scale_x_log10()`, `scale_y_log10`… to substitute values with a particular transformation
* `theme()` customize any non\-data components of plots
    + e.g. remove legend with `theme(legend.position = "none")` (could also have inputted "left", "top", "bottom", or "right")
* `guides()` to control display of individual legends – use in conjunction with `guide_legend()` or `guide_colourbar()`
* `coord_cartesian()` to zoom using `xlim` and `ylim` args
* can customize your themes, e.g. `theme_bw()`, `theme_classic()`…, see `ggthemes` for a bunch of others
* `ggsave()` defaults to saving the most recent plot
    + key options: `fig.width`, `fig.height`, `fig.asp`, `out.width`, `out.height` (see chapter for details)
    + other options: `fig.align`, `fig.cap`, `dev` (e.g. `dev = "png"`)

28\.2: Label
------------

### 28\.2\.1

1. Create one plot on the fuel economy data with customised `title`, `subtitle`, `caption`, `x`, `y`, and `colour` labels.

```
mpg %>%
  ggplot(aes(x = hwy, displ))+
  geom_count(aes(colour = class))+
  labs(title = "Larger displacement has lower gas mileage efficiency",
       subtitle = "SUV and pickup classes` tend to be highest on disp",
       caption = "Data is for cars made in either 1999 or 2008",
       colour = "Car class")
```

2. The `geom_smooth()` is somewhat misleading because the `hwy` for large engines is skewed upwards due to the inclusion of lightweight sports cars with big engines. Use your modelling tools to fit and display a better model.
``` mpg %>% ggplot(aes(x = hwy, displ))+ geom_count(aes(colour = class))+ labs(title = "Larger displacement has lower gas mileage efficiency", subtitle = "SUV and pickup classes` tend to be highest on disp", caption = "Data is for cars made in either 1999 or 2008", colour = "Car class")+ geom_smooth() ``` You could take into account the class of the car ``` mpg %>% ggplot(aes(x = hwy, displ, colour = class))+ geom_count()+ labs(title = "Larger displacement has lower gas mileage efficiency", subtitle = "SUV and pickup classes` tend to be highest on disp", caption = "Data is for cars made in either 1999 or 2008", colour = "Car class")+ geom_smooth()+ facet_wrap(~class) ``` 3. Take an exploratory graphic that you’ve created in the last month, and add informative titles to make it easier for others to understand. Done seperately. 28\.3: Annotations ------------------ ### 28\.3\.1 1. Use `geom_text()` with infinite positions to place text at the four corners of the plot. ``` data_label <- tibble(x = c(Inf, -Inf), hjust = c("right", "left"), y = c(Inf, -Inf), vjust = c("top", "bottom")) %>% expand(nesting(x, hjust), nesting(y, vjust)) %>% mutate(label = glue::glue("hjust: {hjust}; vjust: {vjust}")) mpg %>% ggplot(aes(x = hwy, displ))+ geom_count(aes(colour = class))+ labs(title = "Larger displacement has lower gas mileage efficiency", subtitle = "SUV and pickup classes` tend to be highest on disp", caption = "Data is for cars made in either 1999 or 2008", colour = "Car class")+ geom_text(aes(x = x, y = y, label = label, hjust = hjust, vjust = vjust), data = data_label) ``` 2. Read the documentation for `annotate()`. How can you use it to add a text label to a plot without having to create a tibble? * function adds geoms, but not mapped from variables of a dataframe, so can pass in small items or single labels ``` mpg %>% ggplot(aes(x = hwy, displ))+ geom_count(aes(colour = class))+ labs(title = "Larger displacement has lower gas mileage efficiency", subtitle = "SUV and pickup classes` tend to be highest on disp", caption = "Data is for cars made in either 1999 or 2008", colour = "Car class")+ annotate("text", x = Inf, y = Inf, label = paste0("Mean highway mpg: ", round(mean(mpg$hwy))), vjust = "top", hjust = "right") ``` 3. How do labels with `geom_text()` interact with faceting? How can you add a label to a single facet? How can you put a different label in each facet? (Hint: think about the underlying data.) ``` data_label_single <- tibble(x = Inf, y = Inf, label = paste0("Mean highway mpg: ", round(mean(mpg$hwy)))) data_label <- mpg %>% group_by(class) %>% summarise(hwy = round(mean(hwy))) %>% mutate(label = paste0("hwy mpg for ", class, ": ", hwy)) %>% mutate(x = Inf, y = Inf) mpg %>% ggplot(aes(x = hwy, displ))+ geom_count(aes(colour = class))+ labs(title = "Larger displacement has lower gas mileage efficiency", subtitle = "SUV and pickup classes` tend to be highest on disp", caption = "Data is for cars made in either 1999 or 2008", colour = "Car class")+ facet_wrap(~class)+ geom_smooth()+ geom_text(aes(x = x, y = y, label = label), data = data_label, vjust = "top", hjust = "right") ``` 4. What arguments to `geom_label()` control the appearance of the background box? 
* the `fill` argument controls the background color
* `alpha` controls its transparency (how see\-through the box is)

```
best_in_class <- mpg %>%
  group_by(class) %>%
  filter(row_number(desc(hwy)) == 1)

ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(colour = class)) +
  geom_label(aes(label = model), data = best_in_class, nudge_y = 2, alpha = 0.1, fill = "green")
```

5. What are the four arguments to `arrow()`? How do they work? Create a series of plots that demonstrate the most important options.

```
b <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point()

df <- data.frame(x1 = 2.62, x2 = 3.57, y1 = 21.0, y2 = 15.0)

b +
  geom_curve(
    aes(x = x1, y = y1, xend = x2, yend = y2),
    data = df,
    arrow = arrow(length = unit(0.03, "npc"))
  )
```

* `angle` (in degrees), `length` (use the `unit()` function to specify with a number and type, e.g. "inches"), `ends` ("last", "first", or "both" – specifying which end), `type` ("open" or "closed")
* See [28\.3\.1\.5](28-graphics-for-communication.html#section-105) for more notes on line options (not specific to `arrow()`)

28\.4: Scales
-------------

### 28\.4\.4

1. Why doesn't the following code override the default scale?

```
df <- tibble(x = rnorm(100), y = rnorm(100))

ggplot(df, aes(x, y)) +
  geom_hex() +
  scale_colour_gradient(low = "white", high = "red") +
  coord_fixed()
```

* `geom_hex` uses `fill`, not `colour`

```
df <- tibble(x = rnorm(100), y = rnorm(100))

ggplot(df, aes(x, y)) +
  geom_hex() +
  scale_fill_gradient(low = "white", high = "red") +
  coord_fixed()
```

2. What is the first argument to every scale? How does it compare to `labs()`?

* `name`, i.e. what the title will be for that axis/legend/… The first argument of `labs` is `...`, so it requires you to name the input

3. Change the display of the presidential terms by:

    1. Combining the two variants shown above.
    2. Improving the display of the y axis.
    3. Labelling each term with the name of the president.
    4. Adding informative plot labels.
    5. Placing breaks every 4 years (this is trickier than it seems!).

```
presidential %>%
  mutate(id = 33L + row_number()) %>%
  ggplot(aes(start, id, colour = party)) +
  geom_point() +
  geom_segment(aes(xend = end, yend = id)) +
  geom_text(aes(label = name), vjust = "bottom", nudge_y = 0.2)+
  scale_colour_manual(values = c(Republican = "red", Democratic = "blue"))+
  scale_x_date("Year in 20th and 21st century", date_breaks = "4 years", date_labels = "'%y")+
  # scale_x_date(NULL, breaks = presidential$start, date_labels = "'%y")+
  scale_y_continuous(breaks = c(36, 39, 42), labels = c("36th", "39th", "42nd"))+
  labs(y = "President number", x = "Year")
```

4. Use `override.aes` to make the legend on the following plot easier to see.
```
diamonds %>%
  ggplot(aes(carat, price)) +
  geom_point(aes(colour = cut), alpha = 1/20)+
  guides(colour = guide_legend(override.aes = list(alpha = 1)))
```

Appendix
--------

### 28\.3\.1\.5

Not `arrow()` function specifically, but other line end options

```
ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(colour = class)) +
  geom_segment(aes(xend = displ +5, yend = hwy + 5), data = best_in_class, lineend = "round")
```

```
b <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point()

df <- data.frame(x1 = 2.62, x2 = 3.57, y1 = 21.0, y2 = 15.0)

b +
  geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2, colour = "curve"), data = df) +
  geom_segment(aes(x = x1, y = y1, xend = x2, yend = y2, colour = "segment"), data = df)
```

```
b + geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2), data = df, curvature = -0.2)
```

```
b + geom_curve(aes(x = x1, y = y1, xend = x2, yend = y2), data = df, curvature = 1)
```

Grolemund, Garrett, and Hadley Wickham. 2017\. *R for Data Science: Import, Tidy, Transform, Visualize, and Model Data*. 1st ed. O'Reilly Media.
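As a supplement to the `arrow()` exercise in 28\.3\.1 above, here is one more hedged sketch reusing the same `b` and `df` objects, this time varying all four `arrow()` arguments (the specific values are illustrative only):

```
# wider head angle, longer head, arrowheads on both ends, and a closed (filled) head
b +
  geom_curve(
    aes(x = x1, y = y1, xend = x2, yend = y2),
    data = df,
    arrow = arrow(angle = 45, length = unit(0.1, "inches"),
                  ends = "both", type = "closed")
  )
```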
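To illustrate the scales answer above, here is a minimal sketch showing that a scale’s first argument (`name`) and the corresponding named argument to `labs()` set the same axis title; the title text itself is only an example.

```
# Setting the x-axis title via the scale's first argument...
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_x_continuous("Engine displacement (litres)")

# ...is equivalent to supplying it as a named argument to labs()
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  labs(x = "Engine displacement (litres)")
```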
Data Science
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/intro.html
Introduction ============ This document provides some tools, demonstrations, and more to make data processing, programming, modeling, visualization, and presentation easier. The goal here is to instill a foundation for sound data science, and also to provide some of the *why* behind the approaches demonstrated, so that one can better implement them. Focus is given to tools and methods that are generalizable, replicable, and efficient. Intended Audience ----------------- If you are a budding data scientist in academia, industry, or elsewhere, the content in this document should provide enough knowledge for you to do better work under a variety of circumstances, and from data creation to presentation of results. You probably have already tried to analyze data in some form before this, but may be struggling to do so in an efficient way. Hopefully this will allow one to extend what they know, fill in gaps, and otherwise try some new tricks. Programming Language -------------------- While the programming language focus is on R, where applicable (which is most of the time), [Python notebooks are also available](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), and they include the same exercises. Furthermore, much of the actual content and concepts discussed would apply to any programming language used for data science, and the concepts for programming, modeling, visualization, reproducibility and more are applicable for anyone engaged in data science. To use R effectively enough for the content here you need to also install RStudio (which is not R itself), and know how to install and load packages. Additional Practice ------------------- Exercises are provided to get more practice with the things learned. In addition, references are available to dive into even more extensions of the things demonstrated here. Outline ------- The content is divided into five sections. While parts assume at least the information in Part 1, they are otherwise fairly independent and can be explored on their own. ### Part 1: Information Processing * Understanding Basic R Approaches to Gathering and Processing Data + Overview of Data Structures + Getting data in and out + Indexing * Getting Acquainted with Other Approaches to Data Processing + Pipes, and how to use them + tidyverse + data.table + Misc. ### Part 2: Programming Basics * Using R more fully + Dealing with objects + Iterative programming + Writing functions * Going further + Code style + Vectorization + Regular expressions ### Part 3: Modeling * Model Exploration + Key concepts + Understanding and fitting models + Overview of extensions * Model Criticism + Model Assessment + Model Comparison * Machine Learning + Concepts + Demonstration of techniques ### Part 4: Visualization * Thinking Visually + Visualizing Information + Color + Contrast + and more… * Using ggplot2 + Aesthetics + Layers + Themes + and more… * Adding Interactivity + Package demos + Shiny ### Part 5: Presentation * Building Better Data\-Driven Products + Reproducibility concepts * Starting out with R markdown + Standard documents * Customization and more + Themes, CSS, etc. Workshops --------- This document also serves as a basis for several workshops. To follow along with the examples, clone/download the related section repos. Downloading any one of them will have an R project and associated data, such that the code from any section should run. 
* [R I: Information Processing](https://github.com/m-clark/R-I-Basics) * [R II: Programming](https://github.com/m-clark/R-II-Programming) * [R III: Modeling](https://github.com/m-clark/R-III-Modeling) * [R IV: Visualization](https://github.com/m-clark/R-IV-Visualization) * [R V: Presentation](https://github.com/m-clark/R-V-Presentation) Other ----- Color coding in text: * emphasis * package * function * object/class * link Some key packages used in the following demonstrations and exercises: tidyverse (several packages), data.table, tidymodels, rmarkdown ### Python notebooks The related Python notebooks may be found [here](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks). The primary modules used and demonstrated are numpy and pandas for the first two parts, and throughout the rest. Besides that, statsmodels, scikit\-learn, plotnine, plotly, and others are employed. ### Other R packages Many other packages are also used for data or minor demonstration, so feel free to install as we come across them. Here are a few. ggplot2movies, nycflights13, DT, highcharter, magrittr, maps, mgcv (already comes with base R), plotly, quantmod, readr, visNetwork, emmeans, ggeffects ### History This was initially a document that only contained the information processing and visualization chapters, and it filled a need specifically for workshops. Over time, it has increased to include other things I think are generally missing from those who simply learn how to get to the end results. The content is born out of the gaps I see in those I consult with, and also just ‘I wish I’d known that’ type of experience. In the interim, textbooks have come out, authored by some who develop the packages being used here, that could be used as a next step, as they offer more detail. They are listed in the [references](references.html#references) section. ### Current Efforts At present most of the content is more or less as I want it, though I’m filling it out with some minor additions here and there over time. Currently I’m improving the Python documents as well.
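As a practical note on the setup mentioned under Programming Language above, here is a minimal sketch of installing and then loading packages; the package names are simply the key ones listed in the Other section, and the install step only needs to be run once.

```
# Install once from CRAN
install.packages(c("tidyverse", "data.table", "tidymodels", "rmarkdown"))

# Load what you need at the start of each session
library(tidyverse)
library(data.table)
```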
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/intro.html
Introduction ============ This document provides some tools, demonstrations, and more to make data processing, programming, modeling, visualization, and presentation easier. The goal here is to instill a foundation for sound data science, and also to provide some of the *why* behind the approaches demonstrated, so that one can better implement them. Focus is given to tools and methods that are generalizable, replicable, and efficient. Intended Audience ----------------- If you are a budding data scientist in academia, industry, or elsewhere, the content in this document should provide enough knowledge for you to do better work under a variety of circumstances, and from data creation to presentation of results. You probably have already tried to analyze data in some form before this, but may be struggling to do so in an efficient way. Hopefully this will allow one to extend what they know, fill in gaps, and otherwise try some new tricks. Programming Language -------------------- While the programming language focus is on R, where applicable (which is most of the time), [Python notebooks are also available](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), and they include the same exercises. Furthermore, much of the actual content and concepts discussed would apply to any programming language used for data science, and the concepts for programming, modeling, visualization, reproducibility and more are applicable for anyone engaged in data science. To use R effectively enough for the content here you need to also install RStudio (which is not R itself), and know how to install and load packages. Additional Practice ------------------- Exercises are provided to get more practice with the things learned. In addition, references are available to dive into even more extensions of the things demonstrated here. Outline ------- The content is divided into five sections. While parts assume at least the information in Part 1, they are otherwise fairly independent and can be explored on their own. ### Part 1: Information Processing * Understanding Basic R Approaches to Gathering and Processing Data + Overview of Data Structures + Getting data in and out + Indexing * Getting Acquainted with Other Approaches to Data Processing + Pipes, and how to use them + tidyverse + data.table + Misc. ### Part 2: Programming Basics * Using R more fully + Dealing with objects + Iterative programming + Writing functions * Going further + Code style + Vectorization + Regular expressions ### Part 3: Modeling * Model Exploration + Key concepts + Understanding and fitting models + Overview of extensions * Model Criticism + Model Assessment + Model Comparison * Machine Learning + Concepts + Demonstration of techniques ### Part 4: Visualization * Thinking Visually + Visualizing Information + Color + Contrast + and more… * Using ggplot2 + Aesthetics + Layers + Themes + and more… * Adding Interactivity + Package demos + Shiny ### Part 5: Presentation * Building Better Data\-Driven Products + Reproducibility concepts * Starting out with R markdown + Standard documents * Customization and more + Themes, CSS, etc. Workshops --------- This document also serves as a basis for several workshops. To follow along with the examples, clone/download the related section repos. Downloading any one of them will have an R project and associated data, such that the code from any section should run. 
* [R I: Information Processing](https://github.com/m-clark/R-I-Basics) * [R II: Programming](https://github.com/m-clark/R-II-Programming) * [R III: Modeling](https://github.com/m-clark/R-III-Modeling) * [R IV: Visualization](https://github.com/m-clark/R-IV-Visualization) * [R V: Presentation](https://github.com/m-clark/R-V-Presentation) Other ----- Color coding in text: * emphasis * package * function * object/class * link Some key packages used in the following demonstrations and exercises: tidyverse (several packages), data.table, tidymodels, rmarkdown ### Python notebooks The related Python notebooks may be found [here](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks). The primary modules used and demonstrated are numpy and pandas for the first two parts, and throughout the rest. Besides that, statsmodels, scikit\-learn, plotnine, plotly, and others are employed. ### Other R packages Many other packages are also used for data or minor demonstration, so feel free to install as we come across them. Here are a few. ggplot2movies, nycflights13, DT, highcharter, magrittr, maps, mgcv (already comes with base R), plotly, quantmod, readr, visNetwork, emmeans, ggeffects ### History This was initially a document that only contained the information processing and visualization chapters, and it filled a need specifically for workshops. Over time, it has increased to include other things I think are generally missing from those who simply learn how to get to the end results. The content is born out of the gaps I see in those I consult with, and also just ‘I wish I’d known that’ type of experience. In the interim, textbooks have come out, authored by some who develop the packages being used here, that could be used as a next step, as they offer more detail. They are listed in the [references](references.html#references) section. ### Current Efforts At present most of the content is more or less as I want it, though I’m filling it out with some minor additions here and there over time. Currently I’m improving the Python documents as well. Intended Audience ----------------- If you are a budding data scientist in academia, industry, or elsewhere, the content in this document should provide enough knowledge for you to do better work under a variety of circumstances, and from data creation to presentation of results. You probably have already tried to analyze data in some form before this, but may be struggling to do so in an efficient way. Hopefully this will allow one to extend what they know, fill in gaps, and otherwise try some new tricks. Programming Language -------------------- While the programming language focus is on R, where applicable (which is most of the time), [Python notebooks are also available](https://github.com/m-clark/data-processing-and-visualization/tree/master/jupyter_notebooks), and they include the same exercises. Furthermore, much of the actual content and concepts discussed would apply to any programming language used for data science, and the concepts for programming, modeling, visualization, reproducibility and more are applicable for anyone engaged in data science. To use R effectively enough for the content here you need to also install RStudio (which is not R itself), and know how to install and load packages. Additional Practice ------------------- Exercises are provided to get more practice with the things learned. In addition, references are available to dive into even more extensions of the things demonstrated here. 
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_structures.html
Data Structures
===============

The goal of data science is to use data to understand the world around you. The primary tool of data science is a programming language that can convert human intention and collected evidence into actionable results. The tool we'll demonstrate here is R.

In order to use R to understand the world around you, you have to know the basics of how R works. Everything in R revolves around information in the form of data, so let's start with how data exists within R.

R has several core data structures, and we'll take a look at each:

* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames

The more you know about R data structures, the more you'll know how to use them and how packages use them, the better you'll understand why things go wrong when they do, and the further you'll be able to go with your data. Furthermore, most of these data structures are common to many programming languages (e.g. vectors, lists, matrices), so what you learn with R will often generalize to other languages as well.

R and other programming languages are used via an IDE (integrated development environment), which makes programming vastly easier through syntax highlighting, code completion, and more. RStudio is the IDE of choice for R, while the choice for Python is more varied (e.g. PyCharm for software developers, Spyder for users of Anaconda), and editors like VSCode can be useful for many languages.

Vectors
-------

*Vectors* form the basis of R data structures. The two main types are atomic vectors and lists, but we'll talk about lists separately.

Here is an R vector. The *elements* of the vector are numeric values.

```
x = c(1, 3, 2, 5, 4)
x
```

```
[1] 1 3 2 5 4
```

All elements of an atomic vector are the same *type*. Example types include:

* character
* numeric (double)
* integer
* logical

In addition, there are special kinds of values like NA ('not available', i.e. missing), NULL, NaN (not a number), Inf (infinite), and so forth.

You can use typeof to examine an object's type, or use an `is` function, e.g. is.logical, to check whether an object is a specific type.

### Character strings

When dealing with text, objects of the character class are what you'd typically be dealing with.

```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
class(x)
```

```
[1] "character"
```

Not much to it, but be aware there is no real limit to what can be represented as a character vector. For example, in a data frame, a special class we'll talk about later, you could have a column where each entry is one of the works of Shakespeare.

### Factors

An important type of vector is a factor. Factors are used to represent categorical data. Although not exactly precise, one can think of factors as integers with labels. For example, the underlying representation of a variable for sex is 1:2 with labels 'Male' and 'Female'. Factors are a special class with attributes, or metadata, that contain the information about the *levels*.

```
x = factor(rep(letters[1:3], e = 10))
x
```

```
 [1] a a a a a a a a a a b b b b b b b b b b c c c c c c c c c c
Levels: a b c
```

```
attributes(x)
```

```
$levels
[1] "a" "b" "c"

$class
[1] "factor"
```

The underlying representation is numeric, but it is important to remember that factors are *categorical*. Thus, they can't be used as numbers would be, as the following demonstrates.
```
x_num = as.numeric(x)  # convert to a numeric object
sum(x_num)
```

```
[1] 60
```

```
sum(x)
```

```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, :
  'sum' not meaningful for factors
```

#### Strings vs. factors

The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you'll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.

For other things, such as text analysis, you'll almost certainly want character strings instead, and in many cases they will be required. It's also worth noting that a lot of base R and other behavior will coerce strings to factors. This made more sense in the early days of R, but is not really necessary these days.

Some packages to note to help you with processing strings and factors:

* forcats
* stringr

### Logicals

Logical scalars/vectors are those that take on one of two values: `TRUE` or `FALSE`. They are especially useful for flagging whether to run certain parts of code, and for indexing certain parts of data structures (e.g. taking the rows that correspond to TRUE). We'll talk about the latter usage later.

Here is a logical vector.

```
my_logic = c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE)
```

Note that logicals are also treated as binary 0/1, and so, for example, taking the mean will provide the proportion of `TRUE` values.

```
!my_logic
```

```
[1] FALSE  TRUE FALSE  TRUE FALSE FALSE
```

```
as.numeric(my_logic)
```

```
[1] 1 0 1 0 1 1
```

```
mean(my_logic)
```

```
[1] 0.6666667
```

### Numeric and integer

The most common types of data you'll deal with are integer and numeric vectors.

```
ints = -3:3  # integer sequences are easily constructed with the colon operator
class(ints)
```

```
[1] "integer"
```

```
x = rnorm(5)  # 5 random values from the standard normal distribution
x
```

```
[1] -0.7613756  0.4875454 -0.2499864 -1.1288420  0.5874086
```

```
typeof(x)
```

```
[1] "double"
```

```
class(x)
```

```
[1] "numeric"
```

```
typeof(ints)
```

```
[1] "integer"
```

```
is.numeric(ints)  # also numeric!
```

```
[1] TRUE
```

The main difference between the two is that integers represent whole numbers only and take up less memory, but practically speaking you typically won't distinguish them for most of your data science needs.

### Dates

Another common data structure you'll deal with is a date variable. Typically dates require special treatment to work as intended, though they can be stored as character strings or factors if desired. The following shows some of the base R functionality for this.

```
Sys.Date()
```

```
[1] "2020-08-19"
```

```
x = as.Date(c(Sys.Date(), '2020-09-01'))
x
```

```
[1] "2020-08-19" "2020-09-01"
```

In almost every case, however, a package like lubridate will make processing them much easier. The following shows how to strip out certain aspects of a date using it.
```
library(lubridate)

month(Sys.Date())
```

```
[1] 8
```

```
day(Sys.Date())
```

```
[1] 19
```

```
wday(Sys.Date(), label = TRUE)
```

```
[1] Wed
Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat
```

```
quarter(Sys.Date())
```

```
[1] 3
```

```
as_date('2000-01-01') + 100
```

```
[1] "2000-04-10"
```

In general though, dates are treated as numeric variables, with a consistent (but arbitrary) starting point. If you use these in analysis, you'll probably want to make zero a useful value (e.g. the starting date).

```
as.numeric(Sys.Date())
```

```
[1] 18493
```

```
as.Date(10, origin = '2000-01-01')  # 10 days after a supplied origin
```

```
[1] "2000-01-11"
```

For visualization purposes, you can typically treat date variables as is, as ordered factors, or use the values as labels, and get the desired result.

Matrices
--------

With multiple dimensions, we are dealing with arrays. Matrices are two-dimensional (2-d) arrays, and are extremely commonly used for scientific computing. The vectors making up a matrix *must all be of the same type*. For example, all values in a matrix might be numeric, or all character strings.

### Creating a matrix

Creating a matrix can be done in a variety of ways.

```
# create vectors
x = 1:4
y = 5:8
z = 9:12

rbind(x, y, z)  # row bind
```

```
  [,1] [,2] [,3] [,4]
x    1    2    3    4
y    5    6    7    8
z    9   10   11   12
```

```
cbind(x, y, z)  # column bind
```

```
     x y  z
[1,] 1 5  9
[2,] 2 6 10
[3,] 3 7 11
[4,] 4 8 12
```

```
matrix(
  c(x, y, z),
  nrow  = 3,
  ncol  = 4,
  byrow = TRUE
)
```

```
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    4
[2,]    5    6    7    8
[3,]    9   10   11   12
```

Lists
-----

Lists in R are highly flexible objects, and probably the most commonly used objects in applied data science. Unlike vectors, whose elements must all be of the same type, lists can contain anything as their elements, even other lists.

Here is a list. We use the list function to create it.

```
x = list(1, "apple", list(3, "cat"))
x
```

```
[[1]]
[1] 1

[[2]]
[1] "apple"

[[3]]
[[3]][[1]]
[1] 3

[[3]][[2]]
[1] "cat"
```

We often want to loop some function over a list.

```
for (element in x) print(class(element))
```

```
[1] "numeric"
[1] "character"
[1] "list"
```

Lists can, and often do, have named elements, which we can then extract by name.

```
x = list("a" = 25, "b" = -1, "c" = 0)

x[["b"]]
```

```
[1] -1
```

Almost all standard models in base R and other packages return an object that is a list. Knowing how to work with a list will allow you to easily access the contents of a model object for further processing. Python has similar structures, lists and dictionaries, where the latter works similarly to R's named list.

Data Frames
-----------

Data frames are a very commonly used data structure, and are essentially a representation of data in a table format with rows and columns. Elements of a data frame can be of different types, because the data.frame class is actually just a list. As such, everything about lists applies to data frames. But they can also be indexed by row or column, just like matrices (a short sketch at the end of this chapter illustrates both behaviors). There are other very common object classes associated with packages that are both a data.frame and some other type of structure (e.g. tibbles in the tidyverse).

Usually your data frame will come directly from import or from manipulation of other R objects (e.g. matrices). However, you should know how to create one from scratch.

### Creating a data frame

The following will create a data frame with two columns, `a` and `b`.
```
mydf = data.frame(
  a = c(1, 5, 2),
  b = c(3, 8, 1)
)
```

Much to the disdain of the tidyverse, we can add row names also.

```
rownames(mydf) = paste0('row', 1:3)

mydf
```

```
     a b
row1 1 3
row2 5 8
row3 2 1
```

Everything about lists applies to data.frames, so we can add, select, and remove elements of a data frame just like lists. However, we'll visit this more in depth later, and see that we'll have much more flexibility with data frames than we would with lists for common data analysis and visualization.

Data Structure Exercises
------------------------

### Exercise 1

Create an object that is a matrix and/or a data.frame, and inspect its *class* or *structure* (use the class or str functions on the object you just created).

### Exercise 2

Create a list of 3 elements, the first of which contains character strings, the second numbers, and the third the data.frame or matrix you just created in Exercise 1.

### Thinking Exercises

* How is a factor different from a character vector?
* How is a data.frame the same as and different from a matrix?
* How is a data.frame the same as and different from a list?

Python Data Structures Notebook
-------------------------------

[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/dataStructures.ipynb)
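As a pointer for the thinking exercises above, here is a minimal sketch, using the `mydf` object created earlier, of how a data frame behaves both like a list and like a matrix, and of why that matters when working with model objects (which are themselves lists). The `lm()` call is purely illustrative and not part of the exercises.

```
# a data.frame is a list of equal-length columns, but it can also be
# indexed by row and column like a matrix
is.list(mydf)    # TRUE
mydf[['a']]      # list-style extraction of a column
mydf[1, ]        # matrix-style indexing: first row
mydf[, 'b']      # matrix-style indexing: column b

# model objects are lists too, so the same tools apply
fit = lm(b ~ a, data = mydf)

is.list(fit)             # TRUE
names(fit)               # the elements available for further processing
fit[['coefficients']]    # extract one element by name
```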
Data Visualization
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_structures.html
Data Structures
===============

The goal of data science is to use data to understand the world around you. The primary tool of data science is a programming language that can convert human intention and collected evidence to actionable results. The tool we'll demonstrate here is R.

In order to use R to understand the world around you, you have to know the basics of how R works. Everything in R revolves around information in the form of data, so let's start with how data exists within R.

R has several core data structures, and we'll take a look at each.

* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames

The more you know about R's data structures, the better you'll understand how to use them, how packages use them, why things go wrong when they do, and how far you'll be able to go with your data. Furthermore, most of these data structures are common to many programming languages (e.g. vectors, lists, matrices), so what you learn with R will often generalize to other languages as well.

R and other programming languages are used via an IDE (integrated development environment), which makes programming vastly easier through syntax highlighting, code completion, and more. RStudio is the IDE of choice for R, while Python's options are more varied (e.g. PyCharm for software developers, Spyder for users of Anaconda), and others like VSCode might be useful for many languages.

Vectors
-------

*Vectors* form the basis of R data structures. Two main types are atomic vectors and lists, but we'll talk about lists separately.

Here is an R vector. The *elements* of the vector are numeric values.

```
x = c(1, 3, 2, 5, 4)
x
```

```
[1] 1 3 2 5 4
```

All elements of an atomic vector are the same *type*. Example types include:

* character
* numeric (double)
* integer
* logical

In addition, there are special kinds of values like NA ('not available', i.e. missing), NULL, NaN (not a number), Inf (infinite) and so forth.

You can use typeof to examine an object's type, or use an `is` function, e.g. is.logical, to check if an object is a specific type.

### Character strings

When dealing with text, objects of the character class are what you'd typically be dealing with.

```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
class(x)
```

```
[1] "character"
```

Not much to it, but be aware there is no real limit to what can be represented as a character vector. For example, in a data frame, a special class we'll talk about later, you could have a column where each entry is one of the works of Shakespeare.

### Factors

An important type of vector is a factor. Factors are used to represent categorical data. Although not exactly precise, one can think of factors as integers with labels. For example, the underlying representation of a variable for sex is 1:2 with labels 'Male' and 'Female'. Factors are a special class with attributes, or metadata, that contain the information about the *levels*.

```
x = factor(rep(letters[1:3], e = 10))
x
```

```
 [1] a a a a a a a a a a b b b b b b b b b b c c c c c c c c c c
Levels: a b c
```

```
attributes(x)
```

```
$levels
[1] "a" "b" "c"

$class
[1] "factor"
```

The underlying representation is numeric, but it is important to remember that factors are *categorical*. Thus, they can't be used as numbers would be, as the following demonstrates.

```
x_num = as.numeric(x)  # convert to a numeric object
sum(x_num)
```

```
[1] 60
```

```
sum(x)
```

```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
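To make the 'integers with labels' idea concrete, here is a minimal sketch that builds the sex example from raw integer codes (the codes themselves are made up for illustration).

```
# hypothetical integer codes for sex, purely for illustration
sex_int = c(1, 2, 2, 1, 2)

# attach labels to the codes to create a factor
sex = factor(sex_int, levels = 1:2, labels = c('Male', 'Female'))
sex
```

```
[1] Male   Female Female Male   Female
Levels: Male Female
```

```
as.integer(sex)  # the integer representation is still underneath
```

```
[1] 1 2 2 1 2
```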
#### Strings vs. factors

The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you'll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages, as sketched below.

For other things, such as text analysis, you'll almost certainly want character strings instead, and in many cases it will be required. It's also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days.

Some packages to note to help you with processing strings and factors:

* forcats
* stringr
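As one concrete illustration of the ordering point above, forcats can set factor levels by order of appearance rather than alphabetically. A minimal sketch (assuming forcats is installed):

```
library(forcats)

grp = c('low', 'medium', 'high', 'high', 'low')

levels(factor(grp))       # default: alphabetical levels
```

```
[1] "high"   "low"    "medium"
```

```
levels(fct_inorder(grp))  # forcats: levels in order of first appearance
```

```
[1] "low"    "medium" "high"
```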
### Logicals

Logical scalars/vectors are those that take on one of two values: `TRUE` or `FALSE`. They are especially useful for flagging whether to run certain parts of code, and for indexing certain parts of data structures (e.g. taking rows that correspond to TRUE). We'll talk about the latter usage later.

Here is a logical vector.

```
my_logic = c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE)
```

Note that logicals are also treated as binary 0:1, and so, for example, taking the mean will provide the proportion of `TRUE` values.

```
!my_logic
```

```
[1] FALSE  TRUE FALSE  TRUE FALSE FALSE
```

```
as.numeric(my_logic)
```

```
[1] 1 0 1 0 1 1
```

```
mean(my_logic)
```

```
[1] 0.6666667
```
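As a quick preview of the indexing use mentioned above, a logical vector can be used to subset another object of the same length. A minimal sketch using `my_logic` (the numeric values are arbitrary):

```
vals = c(10, 20, 30, 40, 50, 60)

vals[my_logic]   # keep only the elements where my_logic is TRUE
```

```
[1] 10 30 50 60
```

```
sum(my_logic)    # because TRUE is 1, this counts the TRUE values
```

```
[1] 4
```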
### Numeric and integer

The most common data structures you'll deal with are integer and numeric vectors.

```
ints = -3:3  # integer sequences are easily constructed with the colon operator
class(ints)
```

```
[1] "integer"
```

```
x = rnorm(5)  # 5 random values from the standard normal distribution
x
```

```
[1] -0.7613756  0.4875454 -0.2499864 -1.1288420  0.5874086
```

```
typeof(x)
```

```
[1] "double"
```

```
class(x)
```

```
[1] "numeric"
```

```
typeof(ints)
```

```
[1] "integer"
```

```
is.numeric(ints)  # also numeric!
```

```
[1] TRUE
```

The main difference between the two is that integers represent whole numbers only and take up less memory, but practically speaking you typically won't need to distinguish them for most of your data science needs.
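Note that a plain numeric literal is a double; the `L` suffix is what produces an integer literal. A small sketch of this, along with a rough look at the memory difference (the exact byte counts vary by platform and R version, so treat them as approximate):

```
typeof(3)    # plain literals are doubles
```

```
[1] "double"
```

```
typeof(3L)   # the L suffix creates an integer
```

```
[1] "integer"
```

```
object.size(rep(1L, 1e6))   # roughly 4 MB for a million integers
object.size(rep(1.0, 1e6))  # roughly 8 MB for a million doubles
```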
### Dates

Another common data structure you'll deal with is a date variable. Dates typically require special treatment to work as intended, but they can be stored as character strings or factors if desired. The following shows some of the base R functionality for this.

```
Sys.Date()
```

```
[1] "2020-08-19"
```

```
x = as.Date(c(Sys.Date(), '2020-09-01'))
x
```

```
[1] "2020-08-19" "2020-09-01"
```

In almost every case however, a package like lubridate will make processing them much easier. The following shows how to strip out certain aspects of a date using it.

```
library(lubridate)

month(Sys.Date())
```

```
[1] 8
```

```
day(Sys.Date())
```

```
[1] 19
```

```
wday(Sys.Date(), label = TRUE)
```

```
[1] Wed
Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat
```

```
quarter(Sys.Date())
```

```
[1] 3
```

```
as_date('2000-01-01') + 100
```

```
[1] "2000-04-10"
```

In general though, dates are treated as numeric variables, with a consistent (but arbitrary) starting point. If you use these in analysis, you'll probably want to make zero a useful value (e.g. the starting date).

```
as.numeric(Sys.Date())
```

```
[1] 18493
```

```
as.Date(10, origin = '2000-01-01')  # 10 days after a supplied origin
```

```
[1] "2000-01-11"
```

For visualization purposes, you can typically treat date variables as is, as ordered factors, or use the values as labels, and get the desired result.
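Following on the point about making zero useful, here is a minimal sketch that re-expresses dates as days since the earliest date in the data (the dates themselves are made up for illustration):

```
dates = as.Date(c('2020-01-01', '2020-01-15', '2020-03-01'))

days_since_start = as.numeric(dates - min(dates))  # zero is now the starting date
days_since_start
```

```
[1]  0 14 60
```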
Matrices
--------

With multiple dimensions, we are dealing with arrays. Matrices are two dimensional (2\-d) arrays, and extremely commonly used for scientific computing. The vectors making up a matrix *must all be of the same type*. For example, all values in a matrix might be numeric, or all character strings.

### Creating a matrix

Creating a matrix can be done in a variety of ways.

```
# create vectors
x = 1:4
y = 5:8
z = 9:12

rbind(x, y, z)  # row bind
```

```
  [,1] [,2] [,3] [,4]
x    1    2    3    4
y    5    6    7    8
z    9   10   11   12
```

```
cbind(x, y, z)  # column bind
```

```
     x y  z
[1,] 1 5  9
[2,] 2 6 10
[3,] 3 7 11
[4,] 4 8 12
```

```
matrix(
  c(x, y, z),
  nrow  = 3,
  ncol  = 4,
  byrow = TRUE
)
```

```
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    4
[2,]    5    6    7    8
[3,]    9   10   11   12
```
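The section shows how to build a matrix but not how to pull values back out, so here is a brief sketch of matrix indexing using the objects just created:

```
m = rbind(x, y, z)

m[2, 3]    # single element: row 2, column 3
```

```
[1] 7
```

```
m[, 1]     # an entire column
```

```
x y z 
1 5 9 
```

```
m['y', ]   # an entire row, selected by name
```

```
[1] 5 6 7 8
```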
Lists
-----

Lists in R are highly flexible objects, and probably the most commonly used for applied data science. Unlike vectors, whose elements must be of the same type, lists can contain anything as their elements, even other lists.

Here is a list. We use the list function to create it.

```
x = list(1, "apple", list(3, "cat"))
x
```

```
[[1]]
[1] 1

[[2]]
[1] "apple"

[[3]]
[[3]][[1]]
[1] 3

[[3]][[2]]
[1] "cat"
```

We often want to loop some function over a list.

```
for (element in x) print(class(element))
```

```
[1] "numeric"
[1] "character"
[1] "list"
```

Lists can, and often do, have named elements, which we can then extract by name.

```
x = list("a" = 25, "b" = -1, "c" = 0)

x[["b"]]
```

```
[1] -1
```

Almost all standard models in base R and other packages return an object that is a list. Knowing how to work with a list will allow you to easily access the contents of the model object for further processing.

Python has similar structures, lists and dictionaries, where the latter works similarly to R's named list.
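To illustrate the point about model objects, here is a small sketch using a base R regression; the data are just random numbers and the object names are arbitrary.

```
d   = data.frame(x1 = rnorm(10), y = rnorm(10))
fit = lm(y ~ x1, data = d)

names(fit)  # the model object is a named list: coefficients, residuals, call, etc.

fit$coefficients     # extract components just like any named list
fit[['residuals']]
```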
Data Frames
-----------

Data frames are a very commonly used data structure, and are essentially a representation of data in a table format with rows and columns. Elements of a data frame can be different types, and this is because the data.frame class is actually just a list. As such, everything about lists applies to them. But they can also be indexed by row or column, just like matrices. There are other very common object classes associated with packages that are both a data.frame and some other type of structure (e.g. tibbles in the tidyverse).

Usually your data frame will come directly from import or manipulation of other R objects (e.g. matrices). However, you should know how to create one from scratch.

### Creating a data frame

The following will create a data frame with two columns, `a` and `b`.

```
mydf = data.frame(
  a = c(1, 5, 2),
  b = c(3, 8, 1)
)
```

Much to the disdain of the tidyverse, we can add row names also.

```
rownames(mydf) = paste0('row', 1:3)
mydf
```

```
     a b
row1 1 3
row2 5 8
row3 2 1
```

Everything about lists applies to data.frames, so we can add, select, and remove elements of a data frame just like lists. However, we'll visit this in more depth later, and see that we'll have much more flexibility with data frames than we would with lists for common data analysis and visualization.
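Because a data frame is both list-like and matrix-like, it can be indexed either way. A brief sketch using `mydf`:

```
mydf$a               # list-style: extract a column by name
```

```
[1] 1 5 2
```

```
mydf[1, ]            # matrix-style: the first row
```

```
     a b
row1 1 3
```

```
mydf[mydf$a > 1, ]   # logical indexing: rows where a is greater than 1
```

```
     a b
row2 5 8
row3 2 1
```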
Data Structure Exercises
------------------------

### Exercise 1

Create an object that is a matrix and/or a data.frame, and inspect its *class* or *structure* (use the class or str functions on the object you just created).

### Exercise 2

Create a list of 3 elements, the first of which contains character strings, the second numbers, and the third, the data.frame or matrix you just created in Exercise 1\.

### Thinking Exercises

* How is a factor different from a character vector?
* How is a data.frame the same as and different from a matrix?
* How is a data.frame the same as and different from a list?

Python Data Structures Notebook
-------------------------------

[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/dataStructures.ipynb)
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_structures.html
Data Structures =============== The goal of data science is to use data to understand the world around you. The primary tool of data science is a programming language that can convert human intention and collected evidence to actionable results. The tool we’ll demonstrate here is R. In order to use R to understand the world around you, you have to know the basics of how R works. Everything in R revolves around information in the form of data, so let’s start with how data exists within R. R has several core data structures, and we’ll take a look at each. * Vectors * Factors * Lists * Matrices/arrays * Data frames The more you know about R data structures, the more you’ll know how to use them, how packages use them, and you’ll also better understand why things go wrong when they do, and the further you’ll be able to go with your data. Furthermore, most of these data structures are common to many programming languages (e.g. vectors, lists, matrices), so what you learn with R will often generalize to other languages as well. R and other programming languages are used via an IDE (integrated development environment), which makes programming vastly easier through syntax highlighting, code completion, and more. RStudio the IDE of choice for R, while Python is varied (e.g. PyCharm for software developers, Spyder for users of Anaconda), and others like VSCode might be useful for many languages. Vectors ------- *Vectors* form the basis of R data structures. Two main types are atomic and lists, but we’ll talk about lists separately. Here is an R vector. The *elements* of the vector are numeric values. ``` x = c(1, 3, 2, 5, 4) x ``` ``` [1] 1 3 2 5 4 ``` All elements of an atomic vector are the same *type*. Example types include: * character * numeric (double) * integer * logical In addition, there are special kinds of values like NA (‘not available’ i.e. missing), NULL, NaN (not a number), Inf (infinite) and so forth. You can use typeof to examine an object’s type, or use an `is` function, e.g. is.logical, to check if an object is a specific type. ### Character strings When dealing with text, objects of the character class are what you’d typically be dealing with. ``` x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory') class(x) ``` ``` [1] "character" ``` Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, a special class we’ll talk about later, you could have a column where each entry is one of the works of Shakespeare. ### Factors An important type of vector is a factor. Factors are used to represent categorical data structures. Although not exactly precise, one can think of factors as integers with labels. For example, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the *levels*. ``` x = factor(rep(letters[1:3], e = 10)) x ``` ``` [1] a a a a a a a a a a b b b b b b b b b b c c c c c c c c c c Levels: a b c ``` ``` attributes(x) ``` ``` $levels [1] "a" "b" "c" $class [1] "factor" ``` The underlying representation is numeric, but it is important to remember that factors are *categorical*. Thus, they can’t be used as numbers would be, as the following demonstrates. 
``` x_num = as.numeric(x) # convert to a numeric object sum(x_num) ``` ``` [1] 60 ``` ``` sum(x) ``` ``` Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors ``` #### Strings vs. factors The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages. For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days. Some packages to note to help you with processing strings and factors: * forcats * stringr ### Logicals Logical scalar/vectors are those that take on one of two values: `TRUE` or `FALSE`. They are especially useful in flagging whether to run certain parts of code, and indexing certain parts of data structures (e.g. taking rows that correspond to TRUE). We’ll talk about the latter usage later. Here is a logical vector. ``` my_logic = c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE) ``` Note also that logicals are also treated as binary 0:1, and so, for example, taking the mean will provide the proportion of `TRUE` values. ``` !my_logic ``` ``` [1] FALSE TRUE FALSE TRUE FALSE FALSE ``` ``` as.numeric(my_logic) ``` ``` [1] 1 0 1 0 1 1 ``` ``` mean(my_logic) ``` ``` [1] 0.6666667 ``` ### Numeric and integer The most common type of data structure you’ll deal with are integer and numeric vectors. ``` ints = -3:3 # integer sequences are easily constructed with the colon operator class(ints) ``` ``` [1] "integer" ``` ``` x = rnorm(5) # 5 random values from the standard normal distribution x ``` ``` [1] -0.7613756 0.4875454 -0.2499864 -1.1288420 0.5874086 ``` ``` typeof(x) ``` ``` [1] "double" ``` ``` class(x) ``` ``` [1] "numeric" ``` ``` typeof(ints) ``` ``` [1] "integer" ``` ``` is.numeric(ints) # also numeric! ``` ``` [1] TRUE ``` The main difference between the two is that integers regard whole numbers only and are otherwise smaller in size in memory, but practically speaking you typically won’t distinguish them for most of your data science needs. ### Dates Another common data structure you’ll deal with is a date variable. Typically dates require special treatment and to work as intended, but they can be stored as character strings or factors if desired. The following shows some of the base R functionality for this. ``` Sys.Date() ``` ``` [1] "2020-08-19" ``` ``` x = as.Date(c(Sys.Date(), '2020-09-01')) x ``` ``` [1] "2020-08-19" "2020-09-01" ``` In almost every case however, a package like lubridate will make processing them much easier. The following shows how to strip out certain aspects of a date using it. 
``` library(lubridate) month(Sys.Date()) ``` ``` [1] 8 ``` ``` day(Sys.Date()) ``` ``` [1] 19 ``` ``` wday(Sys.Date(), label = TRUE ) ``` ``` [1] Wed Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` ``` quarter(Sys.Date()) ``` ``` [1] 3 ``` ``` as_date('2000-01-01') + 100 ``` ``` [1] "2000-04-10" ``` In general though, dates are treated as numeric variables, with consistent (but arbitrary) starting point. If you use these in analysis, you’ll probably want to make zero a useful value (e.g. the starting date). ``` as.numeric(Sys.Date()) ``` ``` [1] 18493 ``` ``` as.Date(10, origin = '2000-01-01') # 10 days after a supplied origin ``` ``` [1] "2000-01-11" ``` For visualization purposes, you can typically treat date variables as is, as ordered factors, or use the values as labels, and get the desired result. Matrices -------- With multiple dimensions, we are dealing with arrays. Matrices are two dimensional (2\-d) arrays, and extremely commonly used for scientific computing. The vectors making up a matrix *must all be of the same type*. For example, all values in a matrix might be numeric, or all character strings. ### Creating a matrix Creating a matrix can be done in a variety of ways. ``` # create vectors x = 1:4 y = 5:8 z = 9:12 rbind(x, y, z) # row bind ``` ``` [,1] [,2] [,3] [,4] x 1 2 3 4 y 5 6 7 8 z 9 10 11 12 ``` ``` cbind(x, y, z) # column bind ``` ``` x y z [1,] 1 5 9 [2,] 2 6 10 [3,] 3 7 11 [4,] 4 8 12 ``` ``` matrix( c(x, y, z), nrow = 3, ncol = 4, byrow = TRUE ) ``` ``` [,1] [,2] [,3] [,4] [1,] 1 2 3 4 [2,] 5 6 7 8 [3,] 9 10 11 12 ``` Lists ----- Lists in R are highly flexible objects, and probably the most commonly used for applied data science. Unlike vectors, whose elements must be of the same type, lists can contain anything as their elements, even other lists. Here is a list. We use the list function to create it. ``` x = list(1, "apple", list(3, "cat")) x ``` ``` [[1]] [1] 1 [[2]] [1] "apple" [[3]] [[3]][[1]] [1] 3 [[3]][[2]] [1] "cat" ``` We often want to loop some function over a list. ``` for (element in x) print(class(element)) ``` ``` [1] "numeric" [1] "character" [1] "list" ``` Lists can, and often do, have named elements, which we can then extract by name. ``` x = list("a" = 25, "b" = -1, "c" = 0) x[["b"]] ``` ``` [1] -1 ``` Almost all standard models in base R and other packages return an object that is a list. Knowing how to work with a list will allow you to easily access the contents of the model object for further processing. Python has similar structures, lists and dictionaries, where the latter works similarly to R’s named list. Data Frames ----------- Data frames are a very commonly used data structure, and are essentially a representation of data in a table format with rows and columns. Elements of a data frame can be different types, and this is because the data.frame class is actually just a list. As such, everything about lists applies to them. But they can also be indexed by row or column as well, just like matrices. There are other very common types of object classes associated with packages that are both a data.frame and some other type of structure (e.g. tibbles in the tidyverse). Usually your data frame will come directly from import or manipulation of other R objects (e.g. matrices). However, you should know how to create one from scratch. ### Creating a data frame The following will create a data frame with two columns, `a` and `b`. 
``` mydf = data.frame( a = c(1, 5, 2), b = c(3, 8, 1) ) ``` Much to the disdain of the tidyverse, we can add row names also. ``` rownames(mydf) = paste0('row', 1:3) mydf ``` ``` a b row1 1 3 row2 5 8 row3 2 1 ``` Everything about lists applies to data.frames, so we can add, select, and remove elements of a data frame just like lists. However we’ll visit this more in depth later, and see that we’ll have much more flexibility with data frames than we would lists for common data analysis and visualization. Data Structure Exercises ------------------------ ### Exercise 1 Create an object that is a matrix and/or a data.frame, and inspect its *class* or *structure* (use the class or str functions on the object you just created). ### Exercise 2 Create a list of 3 elements, the first of which contains character strings, the second numbers, and the third, the data.frame or matrix you just created in Exercise 1\. ### Thinking Exercises * How is a factor different from a character vector? * How is a data.frame the same as and different from a matrix? * How is a data.frame the same as and different from a list? Python Data Structures Notebook ------------------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/dataStructures.ipynb) Vectors ------- *Vectors* form the basis of R data structures. Two main types are atomic and lists, but we’ll talk about lists separately. Here is an R vector. The *elements* of the vector are numeric values. ``` x = c(1, 3, 2, 5, 4) x ``` ``` [1] 1 3 2 5 4 ``` All elements of an atomic vector are the same *type*. Example types include: * character * numeric (double) * integer * logical In addition, there are special kinds of values like NA (‘not available’ i.e. missing), NULL, NaN (not a number), Inf (infinite) and so forth. You can use typeof to examine an object’s type, or use an `is` function, e.g. is.logical, to check if an object is a specific type. ### Character strings When dealing with text, objects of the character class are what you’d typically be dealing with. ``` x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory') class(x) ``` ``` [1] "character" ``` Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, a special class we’ll talk about later, you could have a column where each entry is one of the works of Shakespeare. ### Factors An important type of vector is a factor. Factors are used to represent categorical data structures. Although not exactly precise, one can think of factors as integers with labels. For example, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the *levels*. ``` x = factor(rep(letters[1:3], e = 10)) x ``` ``` [1] a a a a a a a a a a b b b b b b b b b b c c c c c c c c c c Levels: a b c ``` ``` attributes(x) ``` ``` $levels [1] "a" "b" "c" $class [1] "factor" ``` The underlying representation is numeric, but it is important to remember that factors are *categorical*. Thus, they can’t be used as numbers would be, as the following demonstrates. ``` x_num = as.numeric(x) # convert to a numeric object sum(x_num) ``` ``` [1] 60 ``` ``` sum(x) ``` ``` Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors ``` #### Strings vs. 
factors The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages. For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days. Some packages to note to help you with processing strings and factors: * forcats * stringr ### Logicals Logical scalar/vectors are those that take on one of two values: `TRUE` or `FALSE`. They are especially useful in flagging whether to run certain parts of code, and indexing certain parts of data structures (e.g. taking rows that correspond to TRUE). We’ll talk about the latter usage later. Here is a logical vector. ``` my_logic = c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE) ``` Note also that logicals are also treated as binary 0:1, and so, for example, taking the mean will provide the proportion of `TRUE` values. ``` !my_logic ``` ``` [1] FALSE TRUE FALSE TRUE FALSE FALSE ``` ``` as.numeric(my_logic) ``` ``` [1] 1 0 1 0 1 1 ``` ``` mean(my_logic) ``` ``` [1] 0.6666667 ``` ### Numeric and integer The most common type of data structure you’ll deal with are integer and numeric vectors. ``` ints = -3:3 # integer sequences are easily constructed with the colon operator class(ints) ``` ``` [1] "integer" ``` ``` x = rnorm(5) # 5 random values from the standard normal distribution x ``` ``` [1] -0.7613756 0.4875454 -0.2499864 -1.1288420 0.5874086 ``` ``` typeof(x) ``` ``` [1] "double" ``` ``` class(x) ``` ``` [1] "numeric" ``` ``` typeof(ints) ``` ``` [1] "integer" ``` ``` is.numeric(ints) # also numeric! ``` ``` [1] TRUE ``` The main difference between the two is that integers regard whole numbers only and are otherwise smaller in size in memory, but practically speaking you typically won’t distinguish them for most of your data science needs. ### Dates Another common data structure you’ll deal with is a date variable. Typically dates require special treatment and to work as intended, but they can be stored as character strings or factors if desired. The following shows some of the base R functionality for this. ``` Sys.Date() ``` ``` [1] "2020-08-19" ``` ``` x = as.Date(c(Sys.Date(), '2020-09-01')) x ``` ``` [1] "2020-08-19" "2020-09-01" ``` In almost every case however, a package like lubridate will make processing them much easier. The following shows how to strip out certain aspects of a date using it. ``` library(lubridate) month(Sys.Date()) ``` ``` [1] 8 ``` ``` day(Sys.Date()) ``` ``` [1] 19 ``` ``` wday(Sys.Date(), label = TRUE ) ``` ``` [1] Wed Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` ``` quarter(Sys.Date()) ``` ``` [1] 3 ``` ``` as_date('2000-01-01') + 100 ``` ``` [1] "2000-04-10" ``` In general though, dates are treated as numeric variables, with consistent (but arbitrary) starting point. If you use these in analysis, you’ll probably want to make zero a useful value (e.g. the starting date). 
``` as.numeric(Sys.Date()) ``` ``` [1] 18493 ``` ``` as.Date(10, origin = '2000-01-01') # 10 days after a supplied origin ``` ``` [1] "2000-01-11" ``` For visualization purposes, you can typically treat date variables as is, as ordered factors, or use the values as labels, and get the desired result. ### Character strings When dealing with text, objects of the character class are what you’d typically be dealing with. ``` x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory') class(x) ``` ``` [1] "character" ``` Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, a special class we’ll talk about later, you could have a column where each entry is one of the works of Shakespeare. ### Factors An important type of vector is a factor. Factors are used to represent categorical data structures. Although not exactly precise, one can think of factors as integers with labels. For example, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the *levels*. ``` x = factor(rep(letters[1:3], e = 10)) x ``` ``` [1] a a a a a a a a a a b b b b b b b b b b c c c c c c c c c c Levels: a b c ``` ``` attributes(x) ``` ``` $levels [1] "a" "b" "c" $class [1] "factor" ``` The underlying representation is numeric, but it is important to remember that factors are *categorical*. Thus, they can’t be used as numbers would be, as the following demonstrates. ``` x_num = as.numeric(x) # convert to a numeric object sum(x_num) ``` ``` [1] 60 ``` ``` sum(x) ``` ``` Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors ``` #### Strings vs. factors The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages. For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days. Some packages to note to help you with processing strings and factors: * forcats * stringr #### Strings vs. factors The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods may require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages. For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. 
This made a lot more sense in the early days of R, but is not really necessary these days. Some packages to note to help you with processing strings and factors: * forcats * stringr ### Logicals Logical scalar/vectors are those that take on one of two values: `TRUE` or `FALSE`. They are especially useful in flagging whether to run certain parts of code, and indexing certain parts of data structures (e.g. taking rows that correspond to TRUE). We’ll talk about the latter usage later. Here is a logical vector. ``` my_logic = c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE) ``` Note also that logicals are also treated as binary 0:1, and so, for example, taking the mean will provide the proportion of `TRUE` values. ``` !my_logic ``` ``` [1] FALSE TRUE FALSE TRUE FALSE FALSE ``` ``` as.numeric(my_logic) ``` ``` [1] 1 0 1 0 1 1 ``` ``` mean(my_logic) ``` ``` [1] 0.6666667 ``` ### Numeric and integer The most common type of data structure you’ll deal with are integer and numeric vectors. ``` ints = -3:3 # integer sequences are easily constructed with the colon operator class(ints) ``` ``` [1] "integer" ``` ``` x = rnorm(5) # 5 random values from the standard normal distribution x ``` ``` [1] -0.7613756 0.4875454 -0.2499864 -1.1288420 0.5874086 ``` ``` typeof(x) ``` ``` [1] "double" ``` ``` class(x) ``` ``` [1] "numeric" ``` ``` typeof(ints) ``` ``` [1] "integer" ``` ``` is.numeric(ints) # also numeric! ``` ``` [1] TRUE ``` The main difference between the two is that integers regard whole numbers only and are otherwise smaller in size in memory, but practically speaking you typically won’t distinguish them for most of your data science needs. ### Dates Another common data structure you’ll deal with is a date variable. Typically dates require special treatment and to work as intended, but they can be stored as character strings or factors if desired. The following shows some of the base R functionality for this. ``` Sys.Date() ``` ``` [1] "2020-08-19" ``` ``` x = as.Date(c(Sys.Date(), '2020-09-01')) x ``` ``` [1] "2020-08-19" "2020-09-01" ``` In almost every case however, a package like lubridate will make processing them much easier. The following shows how to strip out certain aspects of a date using it. ``` library(lubridate) month(Sys.Date()) ``` ``` [1] 8 ``` ``` day(Sys.Date()) ``` ``` [1] 19 ``` ``` wday(Sys.Date(), label = TRUE ) ``` ``` [1] Wed Levels: Sun < Mon < Tue < Wed < Thu < Fri < Sat ``` ``` quarter(Sys.Date()) ``` ``` [1] 3 ``` ``` as_date('2000-01-01') + 100 ``` ``` [1] "2000-04-10" ``` In general though, dates are treated as numeric variables, with consistent (but arbitrary) starting point. If you use these in analysis, you’ll probably want to make zero a useful value (e.g. the starting date). ``` as.numeric(Sys.Date()) ``` ``` [1] 18493 ``` ``` as.Date(10, origin = '2000-01-01') # 10 days after a supplied origin ``` ``` [1] "2000-01-11" ``` For visualization purposes, you can typically treat date variables as is, as ordered factors, or use the values as labels, and get the desired result. Matrices -------- With multiple dimensions, we are dealing with arrays. Matrices are two dimensional (2\-d) arrays, and extremely commonly used for scientific computing. The vectors making up a matrix *must all be of the same type*. For example, all values in a matrix might be numeric, or all character strings. ### Creating a matrix Creating a matrix can be done in a variety of ways. 
Data Frames ----------- Data frames are a very commonly used data structure, and are essentially a representation of data in a table format with rows and columns. Elements of a data frame can be different types, and this is because the data.frame class is actually just a list. As such, everything about lists applies to them. But they can also be indexed by row or column, just like matrices. There are other very common object classes, associated with particular packages, that are both a data.frame and some other type of structure (e.g. tibbles in the tidyverse). Usually your data frame will come directly from import or manipulation of other R objects (e.g. matrices). However, you should know how to create one from scratch. ### Creating a data frame The following will create a data frame with two columns, `a` and `b`. ``` mydf = data.frame( a = c(1, 5, 2), b = c(3, 8, 1) ) ``` Much to the disdain of the tidyverse, we can add row names also. ``` rownames(mydf) = paste0('row', 1:3) mydf ``` ``` a b row1 1 3 row2 5 8 row3 2 1 ``` Everything about lists applies to data.frames, so we can add, select, and remove elements of a data frame just like lists. However, we’ll visit this in more depth later, and see that we’ll have much more flexibility with data frames than we would with lists for common data analysis and visualization.
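As a brief sketch of that dual nature, using the `mydf` just created, both list-style and matrix-style access work:

```
mydf$a                    # a column as a vector (list-style access by name)
mydf[['b']]               # the same idea, via double-bracket extraction
mydf[2, ]                 # a row (matrix-style indexing)
mydf[, 'a']               # a column by matrix-style indexing
mydf$c = mydf$a + mydf$b  # add a new column, just as you would add a list element
```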
Data Structure Exercises ------------------------ ### Exercise 1 Create an object that is a matrix and/or a data.frame, and inspect its *class* or *structure* (use the class or str functions on the object you just created). ### Exercise 2 Create a list of 3 elements, the first of which contains character strings, the second numbers, and the third, the data.frame or matrix you just created in Exercise 1\. ### Thinking Exercises * How is a factor different from a character vector? * How is a data.frame the same as and different from a matrix? * How is a data.frame the same as and different from a list? Python Data Structures Notebook ------------------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/dataStructures.ipynb)
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/io.html
Input/Output ============ Until you get comfortable getting data into your chosen programming and analytical tool, you’re not going to use it as much as you could. You should at least be able to read in common data formats like comma/tab\-separated, Excel, etc. Standard methods in base R for reading in tabular data include the following functions: * read.table * read.csv * readLines Base R also comes with the foreign package for reading in other types of files, especially files from other statistical packages. However, while you may still see it in use, it’s not as useful as what’s found in other packages. Most of the `read.*` functions are going to have a corresponding `write.*` function. For example, if I read a comma\-separated file as follows: ``` mydata = read.csv('data/myfile.csv') ``` Then I would save an R object, e.g. a data frame, as a csv file as follows: ``` write.csv(mydata, file = 'data/newfile.csv') ``` Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the speed gain won’t help much; however, such an approach can also be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version of read.table (fread), and is typically faster than the readr approaches. A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so, save and save.image are probably the most common. To save one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1).
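One small aid on that front, as a sketch using the hypothetical file from the example above: load invisibly returns the names of the objects it restores, so you can see exactly what just entered your workspace.

```
restored = load('data/myfile.RData')  # the hypothetical file from the example above
restored                              # a character vector, e.g. "object1" "object2"
```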
### R Datasets If you just need some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, are too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package or can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks (see the sketch at the end of this section) * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/).
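As a sketch of the chunking idea just mentioned (the file name and column here are hypothetical), readr’s read\_csv\_chunked applies a callback to each chunk, so the whole file never has to sit in memory at once.

```
library(readr)

# Summarize a (hypothetical) large file 10,000 rows at a time; each chunk `x`
# arrives as a data frame along with its starting row position `pos`.
chunk_summaries = read_csv_chunked(
  'data/bigfile.csv',
  DataFrameCallback$new(function(x, pos) {
    data.frame(start_row = pos, n = nrow(x), mean_value = mean(x$value, na.rm = TRUE))
  }),
  chunk_size = 10000
)

head(chunk_summaries)
```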
I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb)
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/io.html
Input/Output ============ Until you get comfortable getting data into your chosen programming and analytical tool, you’re not going to use it as much as you could. You should at least be able to read in common data formats like comma/tab\-separated, Excel, etc. Standard methods in base R for reading in tabular data include the following functions: * read.table * read.csv * readLines Base R also comes with the foreign package for reading in other types of files, especially other statistical packages. However, while you may see it still in use, it’s not as useful as what’s found in other packages. Most of the `read.*` functions are going to have a corresponding `write.*`function. For example if I read a comma\-separated file as follows: ``` mydata = read.csv('data/myfile.csv') ``` Then I would save an R object, e.g. a data frame, as a csv file as follows: ``` write.csv(mydata, file = 'data/newfile.csv') ``` Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). 
### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/). I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb) Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. 
These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. 
``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/). I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? 
What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb)
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/io.html
Input/Output ============ Until you get comfortable getting data into your chosen programming and analytical tool, you’re not going to use it as much as you could. You should at least be able to read in common data formats like comma/tab\-separated, Excel, etc. Standard methods in base R for reading in tabular data include the following functions: * read.table * read.csv * readLines Base R also comes with the foreign package for reading in other types of files, especially other statistical packages. However, while you may see it still in use, it’s not as useful as what’s found in other packages. Most of the `read.*` functions are going to have a corresponding `write.*`function. For example if I read a comma\-separated file as follows: ``` mydata = read.csv('data/myfile.csv') ``` Then I would save an R object, e.g. a data frame, as a csv file as follows: ``` write.csv(mydata, file = 'data/newfile.csv') ``` Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). 
### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/). I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb) Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. 
These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. 
``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/). I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? 
What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb)
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/io.html
Input/Output ============ Until you get comfortable getting data into your chosen programming and analytical tool, you’re not going to use it as much as you could. You should at least be able to read in common data formats like comma/tab\-separated, Excel, etc. Standard methods in base R for reading in tabular data include the following functions: * read.table * read.csv * readLines Base R also comes with the foreign package for reading in other types of files, especially other statistical packages. However, while you may see it still in use, it’s not as useful as what’s found in other packages. Most of the `read.*` functions are going to have a corresponding `write.*`function. For example if I read a comma\-separated file as follows: ``` mydata = read.csv('data/myfile.csv') ``` Then I would save an R object, e.g. a data frame, as a csv file as follows: ``` write.csv(mydata, file = 'data/newfile.csv') ``` Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). 
### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` Other Types of Data ------------------- Be aware that R can handle practically any type of data you want to throw at it. Some examples include: * JSON * SQL * XML * YAML * MongoDB * NETCDF * text (e.g. a novel) * shapefiles (e.g. for geographic data) * Google spreadsheets And many, many others. On the Horizon -------------- feather is designed to make reading/writing data frames efficient, and the really nice thing about it is that it works in both Python and R. It’s still in early stages of development on the R side though. Big Data -------- You may come across the situation where your data cannot be held in memory. One of the first things to be aware of for data processing is that you may not need to have the data all in memory at once. Before shifting to a hardware solution, consider if the following is possible. * *Chunking*: reading and processing the data in chunks * Line at a time: dealing with individual lines of data * Other data formats: for example SQL databases (sqldf package, src\_dbi in dplyr) However, it may be that the end result is still too large. In that case you’ll have to consider a cluster\-based or distributed data situation. Of course R will have tools for that as well. * disk.frame * DBI * sparklyr [And more](https://db.rstudio.com/). I/O Exercises ------------- ### Exercise 1 Use readr and haven to read the following files. Use the URL just like you would any file name. The latter is a Stata file. You can use the RStudio’s menu approach to import the file if you want. * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/cars.csv](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/cars.csv) * [https://raw.githubusercontent.com/m\-clark/data\-processing\-and\-visualization/master/data/presvote.dta](https://raw.githubusercontent.com/m-clark/data-processing-and-visualization/master/data/presvote.dta) If you downloaded the data for this workshop, the files can be accessed in that folder. ### Thinking Exercises Why might you use read\_csv from the readr package rather than read.csv in base R? What is your definition of ‘big’ data? Python I/O Notebook ------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/io.ipynb) Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. 
These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much, however, such an approach actually can be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur. The data.table package provides a faster version read.table, and is typically faster than readr approaches (fread). A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import, export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used. R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so save and save.image are probably the most common. To save a one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1). ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. ``` head(iris) ``` ``` Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa ``` In addition, many packages come with their own data, and these are either available when you load the package, can be loaded with the data function. ``` data(mcycle, package = 'MASS') # loads data without loading package head(mcycle) ``` ``` times accel 1 2.4 0.0 2 2.6 -1.3 3 3.2 -2.7 4 3.6 0.0 5 4.0 -2.7 6 6.2 -2.7 ``` ### R Datasets If you just needs some quick data to learn a new package or try something out, you can consider the datasets package that’s automatically loaded with R. To be honest, most of them are not very accessible conceptually, too small to be interesting, or have other issues, but again, this doesn’t preclude them from helping you learn new things. 
Better \& Faster Approaches --------------------------- Now that you know how base R tools work, you can mostly forget those functions, as there are some better and faster ways to read in data. The readr package has read\_csv, read\_delim, and others. These make assumptions about what type each vector is after an initial scan of the data, then proceed accordingly. If you don’t have ‘big’ data, the subsequent speed gain won’t help much; however, such an approach can also be used as a diagnostic to pick up potential data entry errors, as warnings are given when unexpected observations occur (see the sketch below). The data.table package provides a faster version of read.table (fread), and is typically faster than the readr approaches. A package for reading in foreign statistical files is haven, which has functions like read\_spss and read\_dta for SPSS and Stata files respectively. The package readxl is a clean way to read Excel files that doesn’t require any outside packages or languages. The package rio uses haven, readxl etc., but with just two functions for everything: import and export (also convert). At least for common tabular data types, readr and haven will likely serve most of your needs, at least to start. Reading in data is usually a one\-off event, such that you’ll never need to use the package again after the data is loaded. In that case, you might use the following approach, so that you don’t need to attach the whole package. ``` readr::read_csv('fileloc/filename.csv') ``` You can use that for any package, which can help avoid naming conflicts by not loading a bunch of different packages. Furthermore, if you need packages that do have a naming conflict, using this approach will ensure the function from the package you want will be used.
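As a small illustration of that diagnostic behavior (a sketch with made-up inline data), forcing the column types makes readr warn when a value doesn’t fit:

```
library(readr)

# both columns are declared numeric ('dd'); the stray "oops" can't be parsed,
# so read_csv issues a warning, records the problem, and leaves an NA behind
d = read_csv("a, b
1, 2
3, oops", col_types = 'dd')

problems(d)  # reports the row, column, and offending value
```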
R\-specific Data ---------------- R provides the means to read and store compressed data types. While there are a variety of ways to do so, save and save.image are probably the most common. To save one or more objects we can use save as follows: ``` save(object1, object2, file = 'data/myfile.RData') ``` To get those objects back when we next use R, we can use the load function. ``` load('data/myfile.RData') ``` The save.image function works just like save, but saves all objects currently in your working environment. You would still just use load to load the objects back into your working environment. This might seem handy at first glance, but I would suggest you be more precise with which objects you save[1](#fn1).
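A sketch of being more selective (the object and file names here are hypothetical): save only the results you need, or use saveRDS/readRDS, which store a single object and let you pick its name when reading it back.

```
# save just the objects you actually need, not the whole workspace
save(model_fit, cleaned_data, file = 'data/results.RData')

# an alternative for a single object
saveRDS(model_fit, 'data/model_fit.rds')
model_fit = readRDS('data/model_fit.rds')  # assign to whatever name you like
```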
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/indexing.html
Indexing ======== What follows is probably more of a refresher for those that have used R quite a bit already. Presumably you’ve had enough R exposure to be aware of some of this. However, much of data processing regards data frames, or other tables of mixed data types, so more time will be spent on slicing and dicing of data frames instead. Even so, it would be impossible to use R effectively without knowing how to handle basic data types. Slicing Vectors --------------- Taking individual parts of a vector of values is straightforward and something you’ll likely need to do a lot. The basic idea is to provide the indices for which elements you want to extract. ``` letters[4:6] # lower case letters a-z ``` ``` [1] "d" "e" "f" ``` ``` letters[c(13, 10, 3)] ``` ``` [1] "m" "j" "c" ``` Slicing Matrices/data.frames ---------------------------- With 2\-d objects we can specify rows and columns. Rows are indexed to the left of the comma, columns to the right. ``` myMatrix[1, 2:3] # matrix[rows, columns] ``` Label\-based Indexing --------------------- We can do this by name if row or column names are available. ``` mydf['row1', 'b'] ``` Position\-based Indexing ------------------------ Otherwise we can index by number. ``` mydf[1, 2] ``` Mixed Indexing -------------- Even both! ``` mydf['row1', 2] ``` If the row/column value is empty, all rows/columns are retained. ``` mydf['row1', ] mydf[, 'b'] ``` Non\-contiguous --------------- Note that the indices supplied do not have to be in order or in sequence. ``` mydf[c(1, 3), ] ``` Boolean ------- Boolean indexing requires some `TRUE`\-`FALSE` indicator. In the following, if column A has a value greater than or equal to 2, it is `TRUE` and is selected. Otherwise it is `FALSE` and will be dropped. ``` mydf[mydf$a >= 2, ] ```
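The mydf in the snippets above isn’t defined in the text; here is a small sketch (made-up names and values) that creates one so the label, position, mixed, and boolean forms can actually be run:

```
mydf = data.frame(a = 1:3, b = 4:6, row.names = c('row1', 'row2', 'row3'))

mydf['row1', 'b']    # label-based
mydf[1, 2]           # position-based
mydf['row1', 2]      # mixed
mydf[, 'b']          # all rows of column b
mydf[c(1, 3), ]      # non-contiguous rows
mydf[mydf$a >= 2, ]  # boolean: rows where a is at least 2
```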
List/data.frame Extraction -------------------------- We have a couple of ways to get at elements of a list, and likewise for data frames, as they are also lists. \[ : grab a slice of elements/columns \[\[ : grab specific elements/columns $ : grab specific elements/columns @: extract slot for S4 objects ``` my_list_or_df[2:4] ``` ``` my_list_or_df[['name']] ``` ``` my_list_or_df$name ``` ``` my_list@name ``` In general, position\-based indexing should be avoided, except in the case of iterative programming of the sort that will be covered later. The reason is that these become *magic numbers* when not commented, such that no one will know what they refer to. In addition, any change to the rows/columns of data will render the numbers incorrect, where labels would still be applicable. Indexing Exercises ------------------ The following is a refresher of base R indexing only. Here is a matrix, a data.frame and a list. ``` mymatrix = matrix(rnorm(100), 10, 10) mydf = cars mylist = list(mymatrix, thisdf = mydf) ``` ### Exercise 1 For the matrix, in separate operations, take a slice of rows, a selection of columns, and a single element. ### Exercise 2 For the data.frame, grab a column in 3 different ways. ### Exercise 3 For the list, grab an element by number and by name. Python Indexing Notebook ------------------------ [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/indexing.ipynb)
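For reference, one possible set of answers to the exercises (a sketch using the objects defined just above; other selections would work just as well):

```
mymatrix = matrix(rnorm(100), 10, 10)
mydf = cars
mylist = list(mymatrix, thisdf = mydf)

# Exercise 1: a slice of rows, a selection of columns, a single element
mymatrix[1:3, ]
mymatrix[, c(2, 5)]
mymatrix[4, 4]

# Exercise 2: one column of the data.frame, three ways
mydf$dist
mydf[['dist']]
mydf[, 'dist']

# Exercise 3: a list element by number and by name
mylist[[1]]
mylist[['thisdf']]
```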
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/pipes.html
Pipes ===== Pipes are operators that send what comes before the pipe to what comes after. There are many different pipes, and some packages use their own. However, the vast majority of packages use the same pipe: %\>% Here, we’ll focus on their use with the dplyr package, and the tidyverse more generally. Pipes are also utilized heavily in visualization. Example: ``` mydf %>% select(var1, var2) %>% filter(var1 == 'Yes') %>% summary() ``` ``` Start with a data.frame %>% select columns from it %>% filter/subset it %>% get a summary ``` Note that you should never type the pipe by hand. The keyboard shortcut is `Ctrl/Cmd + Shift + M`. Using Variables as They are Created ----------------------------------- One nice thing about pipelines is that we can use variables as soon as they are created, without having to break out separate objects/steps. ``` mydf %>% mutate(newvar1 = var1 + var2, newvar2 = newvar1 / var3) %>% summarise(newvar2avg = mean(newvar2)) ``` Pipes for Visualization (more later) ------------------------------------ The following provides a means to think about pipes for visualization. It’s just a generic example for now, but we’ll see more later. ``` basegraph %>% points %>% lines %>% layout ``` The Dot ------- Most functions are not ‘pipe\-aware’ by default, or at least, do not have `data` as their first argument, as most functions in the tidyverse and others using tidy style do. In the following we try to send our data frame to lm for a regression. ``` mydf %>% lm(y ~ x) # error ``` Other pipes could potentially work in this situation, e.g. %$% in magrittr. But generally, when you come upon this, you can use a dot to represent the object before the pipe. ``` mydf %>% lm(y ~ x, data=.) # . == mydf ```
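A runnable version of the dot idea, using the built-in mtcars data in place of the generic mydf, plus the magrittr exposition pipe mentioned above (a sketch; any model or columns would do):

```
library(dplyr)

# lm is not pipe-aware, so the dot stands in for the filtered data frame
mtcars %>%
  filter(am == 1) %>%
  lm(mpg ~ wt, data = .)

# magrittr's exposition pipe exposes the columns directly
library(magrittr)
mtcars %$% cor(mpg, wt)
```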
Flexibility ----------- Piping is not just for data.frames. For example, the more general list objects can be used as well, and would be the primary object for the purrr family of functions we’ll discuss later. As a final example, we’ll create a function in an abstract way with pipes. * The following starts with a character vector. * Sends it to a recursive function (named ..). + .. is created on\-the\-fly, and has a single argument (`.`). * After the function is created, it’s used on ., which represents the string before the pipe. * Result: pipes between the words[2](#fn2). ``` c('Ceci', "n'est", 'pas', 'une', 'pipe!') %>% { .. <- . %>% if (length(.) == 1) . else paste(.[1], '%>%', ..(.[-1])) ..(.) } ``` ``` [1] "Ceci %>% n'est %>% pas %>% une %>% pipe!" ``` > Put that in your pipe and smoke it René Magritte! Pipes Summary ------------- Pipes are best used interactively, though you can use them within functions as well, and they are extremely useful for data exploration. Nowadays, more and more packages are being made ‘pipe\-aware’, especially many visualization packages. See the magrittr package for more types of pipes, and more detail on pipes is provided in these [slides](https://m-clark.github.io/workshops/dplyr/mainSlides.html).
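To close, here is a concrete version of the generic select/filter/summary pipeline from the start of the chapter, again using the built-in mtcars data (a sketch; the column choices are arbitrary):

```
library(dplyr)

mtcars %>%
  select(mpg, cyl, am) %>%        # grab a few columns
  filter(am == 1) %>%             # keep manual-transmission cars
  mutate(gpm = 1 / mpg) %>%       # use a new variable as soon as it's created
  summarise(avg_gpm = mean(gpm))  # end with a summary
```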
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/tidyverse.html
Tidyverse ========= What is the Tidyverse? ---------------------- The tidyverse consists of a few key packages: * ggplot2: data visualization * tibble: tibbles, a modern re\-imagining of data frames * tidyr: data tidying * readr: data import * purrr: functional programming, e.g. alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). 
This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often even by yourself if it’s more complicated. In any case, you will first need to create an object that represents the column names, otherwise it looks unwieldy if used within brackets or a function like subset. ``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep('^XYZ', colnames(oldData), value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(ID, num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping are an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * reliant on esoteric knowledge (e.g. regular expressions) * more likely to require creation of new objects (even if we just want to explore) * slower, possibly greatly Running Example --------------- The following data was initially scraped from the web as follows. It is data from the NBA basketball league for the last season, with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We’ll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However, you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We’ll fix this later. The following shows the data as it will eventually be.
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. 
This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. 
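To make that concrete before moving on, here is a minimal sketch on a tiny toy tibble (the names used here are just for illustration and are not part of the bball data; the tidyverse is assumed to be loaded as above).

```
d = tibble(x = 1:3, y = c(10, 20, 30))

d %>% 
  mutate(z = x + y)  # z is computed from whole columns and has the same length as x and y
```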
In addition, there are scoped variants such as mutate\_at, mutate\_if, and mutate\_all to help with specific scenarios, though these have since been superseded by across. To demonstrate, we’ll use across within mutate to make the appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two inputs: a selection of columns (using the same helpers as select) and the function(s) to apply to them[5](#fn5).

```
bball = bball %>% 
  mutate(across(c(-Player, -Pos, -Tm), as.numeric))

glimpse(bball[,1:7])
```

```
Rows: 708
Columns: 7
$ Rk     <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,…
$ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",…
$ Pos    <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"…
$ Age    <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32…
$ Tm     <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",…
$ G      <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68…
$ GS     <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, …
```

Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables.

```
bball = bball %>% 
  mutate(
    trueShooting = PTS / (2 * (FGA + (.44 * FTA))),
    effectiveFG  = (FG + (.5 * X3P)) / FGA, 
    shootingDif  = trueShooting - FG.
  )

summary(select(bball, shootingDif))  # select and others don't have to be piped to use
```

```
  shootingDif      
 Min.   :-0.08561  
 1st Qu.: 0.06722  
 Median : 0.09829  
 Mean   : 0.09420  
 3rd Qu.: 0.12379  
 Max.   : 0.53192  
 NA's   :6         
```

Grouping and Summarizing Data
-----------------------------

Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster.

Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data.

For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far.
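The basic pattern of a grouped summary looks like the following; this is just a sketch on a toy tibble with placeholder names, not anything from the bball data (the tidyverse is assumed to be loaded as above).

```
tibble(g = c('a', 'a', 'b'), value = c(1, 2, 10)) %>% 
  group_by(g) %>%                              # split into subsets by g
  summarize(avg = mean(value, na.rm = TRUE))   # apply a function, then combine the results
```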
Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. 
`group_by(var1, var2)`

Renaming Columns
----------------

Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names.

One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `X3P.` with a dot for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables.

We can use rename to change column names. A basic example is as follows.

```
data %>% rename(new_name = old_name, new_name2 = old_name2)
```

Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns.

```
bball = bball %>% 
  rename_with(
    str_replace,      # function
    contains('.'),    # columns
    pattern = '\\.',  # function arguments
    replacement = '%'
  ) %>% 
  rename_with(str_remove, starts_with('X'), pattern = 'X')

colnames(bball)
```

```
 [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%"
[15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV"
[29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif"
```

Merging Data
------------

Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions.

* inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combinations of the matches are returned.
* left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x.
* anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x.
* full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing.

Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration.
``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. ``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
```
library(tidyr)

stocks <- data.frame(
  time = as.Date('2009-01-01') + 0:9,
  X = rnorm(10, 0, 1),
  Y = rnorm(10, 0, 2),
  Z = rnorm(10, 0, 4)
)

stocks %>% head
```

```
        time           X          Y          Z
1 2009-01-01 -1.23994442 -4.8515935  3.7985281
2 2009-01-02  0.65851483  0.9552487 -2.7255786
3 2009-01-03 -0.91146059 -0.0321312  0.6175274
4 2009-01-04  1.85598621  1.1919978 -2.4837558
5 2009-01-05  0.37266866  0.6297287 -1.1330732
6 2009-01-06 -0.06072664 -2.8673242  1.7155168
```

```
stocks %>% 
  pivot_longer(
    cols = -time,        # works similar to using select()
    names_to = 'stock',  # the name of the column that will have column names as labels
    values_to = 'price'  # the name of the column for the values
  ) %>% 
  head()
```

```
# A tibble: 6 x 3
  time       stock  price
  <date>     <chr>  <dbl>
1 2009-01-01 X     -1.24 
2 2009-01-01 Y     -4.85 
3 2009-01-01 Z      3.80 
4 2009-01-02 X      0.659
5 2009-01-02 Y      0.955
6 2009-01-02 Z     -2.73 
```

Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and specify the separator for the column names.

```
library(tidyr)

stocks <- data.frame(
  time = as.Date('2009-01-01') + 0:9,
  X_1 = rnorm(10, 0, 1),
  X_2 = rnorm(10, 0, 1),
  Y_1 = rnorm(10, 0, 2),
  Y_2 = rnorm(10, 0, 2),
  Z_1 = rnorm(10, 0, 4),
  Z_2 = rnorm(10, 0, 4)
)

head(stocks)
```

```
        time        X_1         X_2        Y_1         Y_2        Z_1        Z_2
1 2009-01-01 -0.9675529 -0.72793192  0.7516393  0.03321408  3.7485540  0.3945022
2 2009-01-02 -0.1780449  0.08926355 -0.1976137  1.53569057 -0.0315400  7.6285628
3 2009-01-03  0.2958189  0.38118235  1.6730362 -1.13635638  0.1543268 -5.9254785
4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959  6.6867209
5 2009-01-05  1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866
6 2009-01-06  1.1623828 -0.27362716 -0.3116307  2.73462718  0.6675895  1.9884072
```

```
stocks %>% 
  pivot_longer(
    cols = -time,
    names_to = c('stock', 'entry'),
    names_sep = '_',
    values_to = 'price'
  ) %>% 
  head()
```

```
# A tibble: 6 x 4
  time       stock entry   price
  <date>     <chr> <chr>   <dbl>
1 2009-01-01 X     1     -0.968 
2 2009-01-01 X     2     -0.728 
3 2009-01-01 Y     1      0.752 
4 2009-01-01 Y     2      0.0332
5 2009-01-01 Z     1      3.75  
6 2009-01-01 Z     2      0.395 
```

Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization.

The following demonstrates the separate function utilized for a very common data processing task: dealing with names. Here we’ll separate `Player` into first and last names based on the space.

```
bball %>% 
  separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% 
  select(1:5) %>% 
  head()
```

```
  Rk first_name last_name Pos Age
1  1       Álex   Abrines  SG  25
2  2     Quincy       Acy  PF  28
3  3     Jaylen     Adams  PG  22
4  4     Steven     Adams   C  25
5  5        Bam   Adebayo   C  21
6  6       Deng      Adel  SF  21
```

Note that this won’t necessarily apply to every name, so further processing may be required.
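pivot\_wider, listed above but not otherwise demonstrated, is simply the reverse operation. As a quick sketch, pivoting the stocks data longer and then immediately back to wide recovers the original layout (nothing here is saved; it is just for illustration).

```
stocks %>% 
  pivot_longer(cols = -time, names_to = 'stock', values_to = 'price') %>% 
  pivot_wider(names_from = stock, values_from = price) %>%  # reverse the previous step
  head()
```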
More Tidyverse
--------------

* dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often.
* broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\.
* tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. Some examples of the ones I’ve used:
  + tidycensus
  + tidybayes
  + tidytext
  + modelr

Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy).

Personal Opinion
----------------

The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization.

* No need to create/overwrite existing objects
* Can overwrite columns and use as they are created
* Makes it easy to look at anything, and do otherwise tedious data checks

Drawbacks:

* Not as fast as data.table or even some base R approaches for many things[8](#fn8)
* The *mindset* can make for unnecessary complication
  + e.g. there is no need to pipe to create a single new variable
* Some approaches are not very intuitive
* Notably less ability to work with some very common data structures (e.g. matrices)

All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended!

Tidyverse Exercises
-------------------

### Exercise 0

Install and load the dplyr and ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB.

```
install.packages('ggplot2movies')
library(ggplot2movies)
data('movies')
```

### Exercise 1

Using the movies data set, perform each of the following actions separately.

#### Exercise 1a

Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form:

```
data %>% 
  mutate(new_var_name = '?')
```

#### Exercise 1b

Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`.

#### Exercise 1c

Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this.

#### Exercise 1d

Rename the `length` column to `length_in_min` (i.e. length in minutes).

### Exercise 2

Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded.

### Exercise 3

Use pivot\_longer to create a ‘tidy’ data set from the following.

```
dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10))
```

### Exercise 4

Now put several actions together in one set of piped operations.

* Filter movies released *after* 1990
* select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables
* group by `mpaa` *and* (your choice) `Action` *or* `Drama`
* get the average rating

It should spit out something like the following:

```
# A tibble: 10 x 3
# Groups: mpaa [5]
   mpaa    Drama AvgRating
   <chr>   <int>     <dbl>
 1 ""          0      5.94
 2 ""          1      6.20
 3 "NC-17"     0      4.28
 4 "NC-17"     1      4.62
 5 "PG"        0      5.19
 6 "PG"        1      6.15
 7 "PG-13"     0      5.44
 8 "PG-13"     1      6.14
 9 "R"         0      4.86
10 "R"         1      5.94
```

Python Pandas Notebook
----------------------

[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb)
alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. 
``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. 
In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. ``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly Running Example --------------- The following data was scraped initially scraped from the web as follows. It is data from the NBA basketball league for the last season with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We’ll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We’ll fix this later. The following shows the data as it will eventually be. 
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. 
``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. 
ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. In addition, there is mutate\_at, mutate\_if, and mutate\_all to help with specific scenarios. To demonstrate, we’ll use mutate\_at to make appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two inputs, variables and functions to apply. As there are multiple variables and (potentially) multiple functions, we use the vars and funs functions to denote them[5](#fn5). 
``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ``` ``` Rows: 708 Columns: 7 $ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"… $ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",… $ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68… $ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ``` Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables. ``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ``` ``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ``` Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far. Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. 
) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. `group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. 
One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `3P.` with a dot for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables. We can use rename to change column names. A basic example is as follows. ``` data %>% rename(new_name = old_name, new_name2 = old_name2) ``` Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns. ``` bball = bball %>% rename_with( str_replace, # function contains('.'), # columns pattern = '\\.', # function arguments replacement = '%' ) %>% rename_with(str_remove, starts_with('X'), pattern = 'X') colnames(bball) ``` ``` [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%" [15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV" [29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif" ``` Merging Data ------------ Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions. inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned. left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x. anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x. full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing. Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration. ``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. 
``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X = rnorm(10, 0, 1), Y = rnorm(10, 0, 2), Z = rnorm(10, 0, 4) ) stocks %>% head ``` ``` time X Y Z 1 2009-01-01 -1.23994442 -4.8515935 3.7985281 2 2009-01-02 0.65851483 0.9552487 -2.7255786 3 2009-01-03 -0.91146059 -0.0321312 0.6175274 4 2009-01-04 1.85598621 1.1919978 -2.4837558 5 2009-01-05 0.37266866 0.6297287 -1.1330732 6 2009-01-06 -0.06072664 -2.8673242 1.7155168 ``` ``` stocks %>% pivot_longer( cols = -time, # works similar to using select() names_to = 'stock', # the name of the column that will have column names as labels values_to = 'price' # the name of the column for the values ) %>% head() ``` ``` # A tibble: 6 x 3 time stock price <date> <chr> <dbl> 1 2009-01-01 X -1.24 2 2009-01-01 Y -4.85 3 2009-01-01 Z 3.80 4 2009-01-02 X 0.659 5 2009-01-02 Y 0.955 6 2009-01-02 Z -2.73 ``` Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and posit the separator for the column names. ``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X_1 = rnorm(10, 0, 1), X_2 = rnorm(10, 0, 1), Y_1 = rnorm(10, 0, 2), Y_2 = rnorm(10, 0, 2), Z_1 = rnorm(10, 0, 4), Z_2 = rnorm(10, 0, 4) ) head(stocks) ``` ``` time X_1 X_2 Y_1 Y_2 Z_1 Z_2 1 2009-01-01 -0.9675529 -0.72793192 0.7516393 0.03321408 3.7485540 0.3945022 2 2009-01-02 -0.1780449 0.08926355 -0.1976137 1.53569057 -0.0315400 7.6285628 3 2009-01-03 0.2958189 0.38118235 1.6730362 -1.13635638 0.1543268 -5.9254785 4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959 6.6867209 5 2009-01-05 1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866 6 2009-01-06 1.1623828 -0.27362716 -0.3116307 2.73462718 0.6675895 1.9884072 ``` ``` stocks %>% pivot_longer( cols = -time, names_to = c('stock', 'entry'), names_sep = '_', values_to = 'price' ) %>% head() ``` ``` # A tibble: 6 x 4 time stock entry price <date> <chr> <chr> <dbl> 1 2009-01-01 X 1 -0.968 2 2009-01-01 X 2 -0.728 3 2009-01-01 Y 1 0.752 4 2009-01-01 Y 2 0.0332 5 2009-01-01 Z 1 3.75 6 2009-01-01 Z 2 0.395 ``` Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization. The following demonstrates the separate function utilized for a very common data processing task\- dealing with names. Here’ we’ll separate player into first and last names based on the space. ``` bball %>% separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% select(1:5) %>% head() ``` ``` Rk first_name last_name Pos Age 1 1 Álex Abrines SG 25 2 2 Quincy Acy PF 28 3 3 Jaylen Adams PG 22 4 4 Steven Adams C 25 5 5 Bam Adebayo C 21 6 6 Deng Adel SF 21 ``` Note that this won’t necessarily apply to every name, so further processing may be required. More Tidyverse -------------- * dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often. * broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\. * tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. 
Some examples of the ones I’ve used: + tidycensus + tidybayes + tidytext + modelr Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy). Personal Opinion ---------------- The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization. * No need to create/overwrite existing objects * Can overwrite columns and use as they are created * Makes it easy to look at anything, and do otherwise tedious data checks Drawbacks: * Not as fast as data.table or even some base R approaches for many things[8](#fn8) * The *mindset* can make for unnecessary complication + e.g. There is no need to pipe to create a single new variable * Some approaches, are not very intuitive * Notably less ability to work with some very common data structures (e.g. matrices) All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended! Tidyverse Exercises ------------------- ### Exercise 0 Install and load the dplyr ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB. ``` install.packages('ggplot2movies') library(ggplot2movies) data('movies') ``` ### Exercise 1 Using the movies data set, perform each of the following actions separately. #### Exercise 1a Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form: ``` data %>% mutate(new_var_name = '?') ``` #### Exercise 1b Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`. #### Exercise 1c Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this. #### Exercise 1d Rename the `length` column to `length_in_min` (i.e. length in minutes). ### Exercise 2 Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded. ### Exercise 3 Use pivot\_longer to create a ‘tidy’ data set from the following. ``` dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10)) ``` ### Exercise 4 Now put several actions together in one set of piped operations. * Filter movies released *after* 1990 * select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables * group by `mpaa` *and* (your choice) `Action` *or* `Drama` * get the average rating It should spit out something like the following: ``` # A tibble: 10 x 3 # Groups: mpaa [5] mpaa Drama AvgRating <chr> <int> <dbl> 1 "" 0 5.94 2 "" 1 6.20 3 "NC-17" 0 4.28 4 "NC-17" 1 4.62 5 "PG" 0 5.19 6 "PG" 1 6.15 7 "PG-13" 0 5.44 8 "PG-13" 1 6.14 9 "R" 0 4.86 10 "R" 1 5.94 ``` Python Pandas Notebook ---------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb)
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/tidyverse.html
Tidyverse ========= What is the Tidyverse? ---------------------- The tidyverse consists of a few key packages: * ggplot2: data visualization * tibble: tibbles, a modern re\-imagining of data frames * tidyr: data tidying * readr: data import * purrr: functional programming, e.g. alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). 
This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. In any case, you will first need to create an object that represents the column names, otherwise it looks unwieldy if used within brackets or a function like subset. ``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep('^XYZ', colnames(oldData), value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, **Q** is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping are an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly Running Example --------------- The following data was initially scraped from the web as follows. It is data from the NBA basketball league for the last season with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We’ll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We’ll fix this later. The following shows the data as it will eventually be. 
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. 
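As a quick sanity check, something along the following lines will show how many of those repeated header rows are present; this is just a small aside using the same `Rk` logic, not a step from the original walkthrough.

```
library(dplyr)   # already attached if the tidyverse has been loaded

# header rows repeat the column labels as values, so Rk holds the literal
# text 'Rk' on those rows rather than a rank number
bball %>% 
  filter(Rk == 'Rk') %>% 
  nrow()
```

Each of those rows simply repeats the column labels as values.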
This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. 
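For instance, with a small toy tibble (made up purely for illustration, not part of the bball data), each expression inside mutate operates on a whole column and returns a vector of the same length, with scalars recycled as needed.

```
library(dplyr)   # tibble() is re-exported by dplyr

d = tibble(x = c(1, 2, 3))

d %>% 
  mutate(
    x_squared  = x^2,           # element-wise, same length as x
    x_centered = x - mean(x)    # mean(x) is a single value, recycled across rows
  )
```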
In addition, there is mutate\_at, mutate\_if, and mutate\_all to help with specific scenarios. To demonstrate, we’ll use mutate\_at to make appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two inputs, variables and functions to apply. As there are multiple variables and (potentially) multiple functions, we use the vars and funs functions to denote them[5](#fn5). ``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ``` ``` Rows: 708 Columns: 7 $ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"… $ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",… $ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68… $ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ``` Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables. ``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ``` ``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ``` Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far. 
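As a bare-bones sketch of that split-apply-combine pattern before the fuller example, the following groups by position and computes a couple of simple summaries; the particular summary chosen here is only illustrative, not part of the original demonstration.

```
library(dplyr)

bball %>% 
  group_by(Pos) %>%        # split into one group per position
  summarize(
    n_players  = n(),                      # apply: group size
    avg_points = mean(PTS, na.rm = TRUE)   # apply: average total points
  )                        # combine: one row per position
```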
Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. 
`group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `3P.` with a dot for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables. We can use rename to change column names. A basic example is as follows. ``` data %>% rename(new_name = old_name, new_name2 = old_name2) ``` Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns. ``` bball = bball %>% rename_with( str_replace, # function contains('.'), # columns pattern = '\\.', # function arguments replacement = '%' ) %>% rename_with(str_remove, starts_with('X'), pattern = 'X') colnames(bball) ``` ``` [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%" [15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV" [29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif" ``` Merging Data ------------ Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions. inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned. left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x. anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x. full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing. Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration. 
``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. ``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X = rnorm(10, 0, 1), Y = rnorm(10, 0, 2), Z = rnorm(10, 0, 4) ) stocks %>% head ``` ``` time X Y Z 1 2009-01-01 -1.23994442 -4.8515935 3.7985281 2 2009-01-02 0.65851483 0.9552487 -2.7255786 3 2009-01-03 -0.91146059 -0.0321312 0.6175274 4 2009-01-04 1.85598621 1.1919978 -2.4837558 5 2009-01-05 0.37266866 0.6297287 -1.1330732 6 2009-01-06 -0.06072664 -2.8673242 1.7155168 ``` ``` stocks %>% pivot_longer( cols = -time, # works similar to using select() names_to = 'stock', # the name of the column that will have column names as labels values_to = 'price' # the name of the column for the values ) %>% head() ``` ``` # A tibble: 6 x 3 time stock price <date> <chr> <dbl> 1 2009-01-01 X -1.24 2 2009-01-01 Y -4.85 3 2009-01-01 Z 3.80 4 2009-01-02 X 0.659 5 2009-01-02 Y 0.955 6 2009-01-02 Z -2.73 ``` Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and posit the separator for the column names. ``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X_1 = rnorm(10, 0, 1), X_2 = rnorm(10, 0, 1), Y_1 = rnorm(10, 0, 2), Y_2 = rnorm(10, 0, 2), Z_1 = rnorm(10, 0, 4), Z_2 = rnorm(10, 0, 4) ) head(stocks) ``` ``` time X_1 X_2 Y_1 Y_2 Z_1 Z_2 1 2009-01-01 -0.9675529 -0.72793192 0.7516393 0.03321408 3.7485540 0.3945022 2 2009-01-02 -0.1780449 0.08926355 -0.1976137 1.53569057 -0.0315400 7.6285628 3 2009-01-03 0.2958189 0.38118235 1.6730362 -1.13635638 0.1543268 -5.9254785 4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959 6.6867209 5 2009-01-05 1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866 6 2009-01-06 1.1623828 -0.27362716 -0.3116307 2.73462718 0.6675895 1.9884072 ``` ``` stocks %>% pivot_longer( cols = -time, names_to = c('stock', 'entry'), names_sep = '_', values_to = 'price' ) %>% head() ``` ``` # A tibble: 6 x 4 time stock entry price <date> <chr> <chr> <dbl> 1 2009-01-01 X 1 -0.968 2 2009-01-01 X 2 -0.728 3 2009-01-01 Y 1 0.752 4 2009-01-01 Y 2 0.0332 5 2009-01-01 Z 1 3.75 6 2009-01-01 Z 2 0.395 ``` Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization. The following demonstrates the separate function utilized for a very common data processing task\- dealing with names. Here’ we’ll separate player into first and last names based on the space. ``` bball %>% separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% select(1:5) %>% head() ``` ``` Rk first_name last_name Pos Age 1 1 Álex Abrines SG 25 2 2 Quincy Acy PF 28 3 3 Jaylen Adams PG 22 4 4 Steven Adams C 25 5 5 Bam Adebayo C 21 6 6 Deng Adel SF 21 ``` Note that this won’t necessarily apply to every name, so further processing may be required. More Tidyverse -------------- * dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often. * broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\. * tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. 
Some examples of the ones I’ve used: + tidycensus + tidybayes + tidytext + modelr Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy). Personal Opinion ---------------- The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization. * No need to create/overwrite existing objects * Can overwrite columns and use as they are created * Makes it easy to look at anything, and do otherwise tedious data checks Drawbacks: * Not as fast as data.table or even some base R approaches for many things[8](#fn8) * The *mindset* can make for unnecessary complication + e.g. There is no need to pipe to create a single new variable * Some approaches, are not very intuitive * Notably less ability to work with some very common data structures (e.g. matrices) All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended! Tidyverse Exercises ------------------- ### Exercise 0 Install and load the dplyr ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB. ``` install.packages('ggplot2movies') library(ggplot2movies) data('movies') ``` ### Exercise 1 Using the movies data set, perform each of the following actions separately. #### Exercise 1a Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form: ``` data %>% mutate(new_var_name = '?') ``` #### Exercise 1b Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`. #### Exercise 1c Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this. #### Exercise 1d Rename the `length` column to `length_in_min` (i.e. length in minutes). ### Exercise 2 Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded. ### Exercise 3 Use pivot\_longer to create a ‘tidy’ data set from the following. ``` dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10)) ``` ### Exercise 4 Now put several actions together in one set of piped operations. * Filter movies released *after* 1990 * select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables * group by `mpaa` *and* (your choice) `Action` *or* `Drama` * get the average rating It should spit out something like the following: ``` # A tibble: 10 x 3 # Groups: mpaa [5] mpaa Drama AvgRating <chr> <int> <dbl> 1 "" 0 5.94 2 "" 1 6.20 3 "NC-17" 0 4.28 4 "NC-17" 1 4.62 5 "PG" 0 5.19 6 "PG" 1 6.15 7 "PG-13" 0 5.44 8 "PG-13" 1 6.14 9 "R" 0 4.86 10 "R" 1 5.94 ``` Python Pandas Notebook ---------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb) What is the Tidyverse? ---------------------- The tidyverse consists of a few key packages: * ggplot2: data visualization * tibble: tibbles, a modern re\-imagining of data frames * tidyr: data tidying * readr: data import * purrr: functional programming, e.g. 
alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. 
``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. 
In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. ``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly Running Example --------------- The following data was scraped initially scraped from the web as follows. It is data from the NBA basketball league for the last season with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We’ll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We’ll fix this later. The following shows the data as it will eventually be. 
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. 
Filtering Rows
--------------

There are repeated header rows in this data[3](#fn3), so we need to drop them. This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number.

Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it.

```
bball = bball %>% 
  filter(Rk != "Rk")
```

* filter returns rows with matching conditions.
* slice allows for a numeric indexing approach[4](#fn4).

Say we want to look at forwards (SF or PF) over the age of 35. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this.

```
bball %>% 
  filter(Age > 35, Pos == "SF" | Pos == "PF") %>% 
  distinct(Player, Pos, Age)
```

```
         Player Pos Age
1  Vince Carter  PF  42
2   Kyle Korver  PF  37
3 Dirk Nowitzki  PF  40
```

Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle.

```
bball %>% 
  slice(1:10)
```
```
  Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS
1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165
2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17
3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108
4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108
5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729
6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32
7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7
8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727
9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37
10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211
```

We can use filtering even with variables just created.

```
bball %>% 
  unite("posTeam", Pos, Tm) %>%         # create a new variable
  filter(posTeam == "SG_GSW") %>%       # use it for filtering
  select(Player, posTeam, Age) %>%      # use it for selection
  arrange(desc(Age))                    # descending order
```

```
         Player posTeam Age
1 Klay Thompson  SG_GSW  28
2    Damion Lee  SG_GSW  26
3   Jacob Evans  SG_GSW  21
```

Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics.

Generating New Data
-------------------

One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. In addition, across (which supersedes the older mutate\_at, mutate\_if, and mutate\_all variants) lets you apply a function to many columns at once within a single mutate call. To demonstrate, we’ll use across to make the appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two main inputs: the columns to modify, specified the same way you would with select, and the function(s) to apply to them[5](#fn5).
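As an aside of my own (not in the original text), across also pairs with the tidyselect helper where, which picks columns by a predicate on their type rather than by name. Assuming, as here, that the only character columns you want to leave alone are the identifiers, this is equivalent in spirit to the explicit version shown next. It is not assigned back to bball, so it doesn’t alter the data used in the rest of the chapter.

```
# select columns by type: every character column except the identifiers,
# then preview the conversion without overwriting bball
bball %>% 
  mutate(across(where(is.character) & !c(Player, Pos, Tm), as.numeric)) %>% 
  glimpse()
```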
``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ``` ``` Rows: 708 Columns: 7 $ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"… $ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",… $ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68… $ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ``` Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables. ``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ``` ``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ``` Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far. Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. 
) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. `group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. 
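To make the ‘poor practice’ point concrete, here is a tiny illustration of my own (not part of the original text): a tibble will happily store non\-syntactic names, but every later reference to them needs backticks, which gets tedious quickly.

```
# a toy tibble with a name starting with a number and another containing a space
tib = tibble(`3P%` = c(.323, .133), `player name` = c('a', 'b'))

tib %>% 
  summarize(avg_3p = mean(`3P%`))
```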
One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage, was made into `X3P.`, with the leading X and a dot standing in for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables.

We can use rename to change column names. A basic example is as follows.

```
data %>% 
  rename(new_name = old_name,
         new_name2 = old_name2)
```

Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns.

```
bball = bball %>% 
  rename_with(
    str_replace,          # function
    contains('.'),        # columns
    pattern = '\\.',      # function arguments
    replacement = '%'
  ) %>% 
  rename_with(str_remove, starts_with('X'), pattern = 'X')

colnames(bball)
```

```
 [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%"
[15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV"
[29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif"
```

Merging Data
------------

Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions.

* inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combinations of the matches are returned.
* left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, whereas a semi join will never duplicate rows of x.
* anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x.
* full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing.

Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration.

```
band_members
```

```
# A tibble: 3 x 2
  Name    Band        
  <chr>   <chr>       
1 Seth    Com Truise  
2 Francis Pixies      
3 Bubba   The New Year
```

```
band_instruments
```

```
# A tibble: 3 x 2
  Name    Instrument 
  <chr>   <chr>      
1 Francis Guitar     
2 Bubba   Guitar     
3 Seth    Synthesizer
```

```
left_join(band_members, band_instruments)
```

```
Joining, by = "Name"
```

```
# A tibble: 3 x 3
  Name    Band         Instrument 
  <chr>   <chr>        <chr>      
1 Seth    Com Truise   Synthesizer
2 Francis Pixies       Guitar     
3 Bubba   The New Year Guitar     
```

When we don’t have a one to one match, the result of the different types of join will become more apparent.
``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X = rnorm(10, 0, 1), Y = rnorm(10, 0, 2), Z = rnorm(10, 0, 4) ) stocks %>% head ``` ``` time X Y Z 1 2009-01-01 -1.23994442 -4.8515935 3.7985281 2 2009-01-02 0.65851483 0.9552487 -2.7255786 3 2009-01-03 -0.91146059 -0.0321312 0.6175274 4 2009-01-04 1.85598621 1.1919978 -2.4837558 5 2009-01-05 0.37266866 0.6297287 -1.1330732 6 2009-01-06 -0.06072664 -2.8673242 1.7155168 ``` ``` stocks %>% pivot_longer( cols = -time, # works similar to using select() names_to = 'stock', # the name of the column that will have column names as labels values_to = 'price' # the name of the column for the values ) %>% head() ``` ``` # A tibble: 6 x 3 time stock price <date> <chr> <dbl> 1 2009-01-01 X -1.24 2 2009-01-01 Y -4.85 3 2009-01-01 Z 3.80 4 2009-01-02 X 0.659 5 2009-01-02 Y 0.955 6 2009-01-02 Z -2.73 ``` Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and posit the separator for the column names. ``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X_1 = rnorm(10, 0, 1), X_2 = rnorm(10, 0, 1), Y_1 = rnorm(10, 0, 2), Y_2 = rnorm(10, 0, 2), Z_1 = rnorm(10, 0, 4), Z_2 = rnorm(10, 0, 4) ) head(stocks) ``` ``` time X_1 X_2 Y_1 Y_2 Z_1 Z_2 1 2009-01-01 -0.9675529 -0.72793192 0.7516393 0.03321408 3.7485540 0.3945022 2 2009-01-02 -0.1780449 0.08926355 -0.1976137 1.53569057 -0.0315400 7.6285628 3 2009-01-03 0.2958189 0.38118235 1.6730362 -1.13635638 0.1543268 -5.9254785 4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959 6.6867209 5 2009-01-05 1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866 6 2009-01-06 1.1623828 -0.27362716 -0.3116307 2.73462718 0.6675895 1.9884072 ``` ``` stocks %>% pivot_longer( cols = -time, names_to = c('stock', 'entry'), names_sep = '_', values_to = 'price' ) %>% head() ``` ``` # A tibble: 6 x 4 time stock entry price <date> <chr> <chr> <dbl> 1 2009-01-01 X 1 -0.968 2 2009-01-01 X 2 -0.728 3 2009-01-01 Y 1 0.752 4 2009-01-01 Y 2 0.0332 5 2009-01-01 Z 1 3.75 6 2009-01-01 Z 2 0.395 ``` Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization. The following demonstrates the separate function utilized for a very common data processing task\- dealing with names. Here’ we’ll separate player into first and last names based on the space. ``` bball %>% separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% select(1:5) %>% head() ``` ``` Rk first_name last_name Pos Age 1 1 Álex Abrines SG 25 2 2 Quincy Acy PF 28 3 3 Jaylen Adams PG 22 4 4 Steven Adams C 25 5 5 Bam Adebayo C 21 6 6 Deng Adel SF 21 ``` Note that this won’t necessarily apply to every name, so further processing may be required. More Tidyverse -------------- * dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often. * broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\. * tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. 
Some examples of the ones I’ve used:

	+ tidycensus
	+ tidybayes
	+ tidytext
	+ modelr

Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy).

Personal Opinion
----------------

The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization.

* No need to create/overwrite existing objects
* Can overwrite columns and use as they are created
* Makes it easy to look at anything, and do otherwise tedious data checks

Drawbacks:

* Not as fast as data.table or even some base R approaches for many things[8](#fn8)
* The *mindset* can make for unnecessary complication
	+ e.g. There is no need to pipe to create a single new variable
* Some approaches are not very intuitive
* Notably less ability to work with some very common data structures (e.g. matrices)

All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended!

Tidyverse Exercises
-------------------

### Exercise 0

Install and load the dplyr and ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB.

```
install.packages('ggplot2movies')
library(ggplot2movies)
data('movies')
```

### Exercise 1

Using the movies data set, perform each of the following actions separately.

#### Exercise 1a

Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form:

```
data %>% 
  mutate(new_var_name = '?')
```

#### Exercise 1b

Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`.

#### Exercise 1c

Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this.

#### Exercise 1d

Rename the `length` column to `length_in_min` (i.e. length in minutes).

### Exercise 2

Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded.

### Exercise 3

Use pivot\_longer to create a ‘tidy’ data set from the following.

```
dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10))
```

### Exercise 4

Now put several actions together in one set of piped operations.

* Filter movies released *after* 1990
* select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables
* group by `mpaa` *and* (your choice) `Action` *or* `Drama`
* get the average rating

It should spit out something like the following:

```
# A tibble: 10 x 3
# Groups:   mpaa [5]
   mpaa    Drama AvgRating
   <chr>   <int>     <dbl>
 1 ""          0      5.94
 2 ""          1      6.20
 3 "NC-17"     0      4.28
 4 "NC-17"     1      4.62
 5 "PG"        0      5.19
 6 "PG"        1      6.15
 7 "PG-13"     0      5.44
 8 "PG-13"     1      6.14
 9 "R"         0      4.86
10 "R"         1      5.94
```

Python Pandas Notebook
----------------------

[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb)
Data Visualization
alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. 
``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn't been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don't have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly 
Running Example --------------- The following data was initially scraped from the web as follows. It is data from the NBA basketball league for the last season with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We'll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We'll fix this later. The following shows the data as it will eventually be. 
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. 
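As a rough sketch of two helpers that don't come up in the bball example, suppose we had a table with numbered and regex\-friendly names (this `df` is made up purely for illustration); the bball demonstration below sticks with contains and ends\_with.

```
library(dplyr)

# made-up columns that share naming schemes
df = data.frame(x1 = 1, x2 = 2, x3 = 3, var_a = 'a', var_b = 'b', note = 'z')

# num_range: a numeric sequence with a common prefix
df %>% select(num_range('x', 1:2))

# matches: anything a regular expression can describe
df %>% select(matches('^var_[ab]$'))
```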
``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. 
ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. In addition, there is mutate\_at, mutate\_if, and mutate\_all to help with specific scenarios. To demonstrate, we’ll use mutate\_at to make appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two inputs, variables and functions to apply. As there are multiple variables and (potentially) multiple functions, we use the vars and funs functions to denote them[5](#fn5). 
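A quick aside, since the wording above reflects an older dplyr interface: the scoped verbs like mutate\_at are superseded in current dplyr (and funs is deprecated), so the chunk below uses across instead, which covers the same ground. A minimal sketch of the two styles on a hypothetical toy table (not the bball data):

```
library(dplyr)

# hypothetical toy table: an id column plus two columns stored as text
toy = data.frame(id = c('a', 'b'), g = c('1', '2'), pts = c('10', '20'))

# older scoped-verb style (still works, but superseded)
toy %>% mutate_at(vars(-id), as.numeric)

# current equivalent with across(), the form used on bball below
toy %>% mutate(across(-id, as.numeric))
```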
``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ``` ``` Rows: 708 Columns: 7 $ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"… $ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",… $ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68… $ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ``` Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables. ``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ``` ``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ``` Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far. Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. 
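# note: .44 * FTA approximates the shooting possessions that end in free throws,
# so trueShooting is points per two scoring attempts; effectiveFG counts a made
# three as 1.5 field goals; shootingDif is the gap between true shooting and raw FG%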
) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. `group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. 
One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `3P.` with a dot for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables. We can use rename to change column names. A basic example is as follows. ``` data %>% rename(new_name = old_name, new_name2 = old_name2) ``` Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns. ``` bball = bball %>% rename_with( str_replace, # function contains('.'), # columns pattern = '\\.', # function arguments replacement = '%' ) %>% rename_with(str_remove, starts_with('X'), pattern = 'X') colnames(bball) ``` ``` [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%" [15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV" [29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif" ``` Merging Data ------------ Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions. inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned. left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x. anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x. full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing. Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration. ``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. 
``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr's joins will help quite a bit. 
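One join from the list above that isn't demonstrated is semi\_join, which only filters the left table rather than adding columns to it. A minimal sketch with made\-up tables (hypothetical, not the band data), along with the `by` argument you would use when the key columns don't share a name:

```
library(dplyr)

# small made-up tables
orders    = data.frame(order_id = 1:4, customer = c('ann', 'bob', 'ann', 'cat'))
customers = data.frame(customer = c('ann', 'bob'), city = c('Austin', 'Boston'))

# inner_join adds columns from the right table (and drops 'cat', who has no match)
inner_join(orders, customers, by = 'customer')

# semi_join keeps only the matching rows of the left table, with no new columns
semi_join(orders, customers, by = 'customer')

# if the keys were named differently, spell out the correspondence,
# e.g. inner_join(orders, customers, by = c('customer' = 'cust_name'))
```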
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/tidyverse.html
Tidyverse ========= What is the Tidyverse? ---------------------- The tidyverse consists of a few key packages: * ggplot2: data visualization * tibble: tibbles, a modern re\-imagining of data frames * tidyr: data tidying * readr: data import * purrr: functional programming, e.g. alternate approaches to apply * dplyr: data manipulation And of course the tidyverse package itself, which will load all of the above in a way that will avoid naming conflicts. ``` library(tidyverse) ``` ``` Loading tidyverse: ggplot2 Loading tidyverse: tibble Loading tidyverse: tidyr Loading tidyverse: readr Loading tidyverse: purrr Loading tidyverse: dplyr Conflicts with tidy packages ------------------------- filter(): dplyr, stats lag(): dplyr, stats ``` In addition, there are other packages like lubridate, rvest, stringr and others in the **hadleyverse** that are also greatly useful. What is Tidy? ------------- *Tidy data* refers to data arranged in a way that makes data processing, analysis, and visualization simpler. In a tidy data set: * Each variable must have its own column. * Each observation must have its own row. * Each value must have its own cell. Think *long* before *wide*. dplyr ----- dplyr provides a grammar of data manipulation (like ggplot2 does for visualization). It is the next iteration of plyr, but plyr is deprecated and no longer used. It’s focused on tools for working with data frames, with over 100 functions that might be of specific use to you. It has three main goals: * Make the most important data manipulation tasks easier. * Do them faster. * Use the same interface to work with data frames, data tables or a database. Some key operations include: * select: grab columns + select helpers: one\_of, starts\_with, num\_range etc. * filter/slice: grab rows * group\_by: grouped operations * mutate/transmute: create new variables * summarize: summarize/aggregate There are various (SQL\-like) join/merge functions: * inner\_join, left\_join etc. And there are a lot of little things like: * n, n\_distinct, nth, n\_groups, count, recode, between In addition, there is no need to quote variable names. ### An example Let’s say we want to select from our data the following variables: * Start with the **ID** variable * The variables **X1** through **X10**, which are not all grouped together, and there are many more *X\** columns * The variables **var1** and **var2**, which are the only variables with *var* in their name * Any variable with a name that starts with **XYZ** How might we go about this in a dataset of possibly hundreds or even thousands of columns? There are several base R approaches that we could go with, but often they will be tedious, or require multiple objects to be created just to get the columns you want. Let’s start with the worst choice. ``` newData = oldData[,c(1,2,3,4, etc.)] ``` Using numeric indexes, or rather *magic numbers*, is not conducive to readability or reproducibility. If anything changes about the data columns, the numbers may no longer be applicable, and you’d have to redo the line again. We could name the variables explicitly. ``` newData = oldData[,c('ID','X1', 'X2', etc.)] ``` This would be fine if there are only a handful. But if you’re trying to reduce a 1000 column data set to several dozen it’s tedious, and generally not pretty regardless. A more advanced alternative regards a two\-step approach with [regular expressions](more.html#regular-expressions). 
This requires that you know something about regex (and you should), but it is difficult to read/understand by those who don’t, and often by even yourself if it’s more complicated. In any case, you first will need to create an object that represents the column names first, otherwise it looks unwieldy if used within brackets or a function like subset. ``` cols = c('ID', paste0('X', 1:10), 'var1', 'var2', grep(colnames(oldData), '^XYZ', value=T)) newData = oldData[,cols] # or via subset newData = subset(oldData, select = cols) ``` Now consider there is even more to do. What if you also want observations where **Z** is **Yes**, Q is **No**, and only the observations with the top 50 values of **var2**, ordered by **var1** (descending)? Probably the more straightforward way in R to do so would be something like the following, where each part is broken out and we continuously write over the object as we modify it. ``` # three operations and overwriting or creating new objects if we want clarity newData = newData[oldData$Z == 'Yes' & oldData$Q == 'No',] newData = newData[order(newData$var2, decreasing=T)[1:50],] newData = newData[order(newData$var1, decreasing=T),] ``` And this is for fairly straightforward operations. Now consider doing all of the previous in one piped operation. The dplyr package will allow us to do something like the following. ``` newData = oldData %>% select(num_range('X', 1:10), contains('var'), starts_with('XYZ')) %>% filter(Z == 'Yes', Q == 'No') %>% top_n(n=50, var2) %>% arrange(desc(var1)) ``` Even if it hadn’t been explained before, you might have been able to guess a little as to what was going on. The code is fairly succinct, we don’t have to keep referencing objects repeatedly, and no explicit intermediary objects are created. dplyr and piping is an *alternative*. You can do all this sort of stuff with base R, for example, with functions like with, within, subset, transform, etc. Though the initial base R approach depicted is fairly concise, in general, it can potentially be: * more verbose * less legible * less amenable to additional data changes * requires esoteric knowledge (e.g. regular expressions) * often requires creation of new objects (even if we just want to explore) * often slower, possibly greatly Running Example --------------- The following data was scraped initially scraped from the web as follows. It is data from the NBA basketball league for the last season with things like player names, position, team name, points per game, field goal percentage, and various other statistics. We’ll use it as an example to demonstrate various functionality found within dplyr. ``` library(rvest) current_year = lubridate::year(Sys.Date()) url = glue::glue("http://www.basketball-reference.com/leagues/NBA_{current_year-1}_totals.html") bball = read_html(url) %>% html_nodes("#totals_stats") %>% html_table() %>% data.frame() save(bball, file='data/bball.RData') ``` However you can just load it into your workspace as below. Note that when initially gathered from the website, the data is all character strings. We’ll fix this later. The following shows the data as it will eventually be. 
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. 
This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. 
In addition, the across function (which supersedes the older mutate\_at, mutate\_if, and mutate\_all variants) lets us apply the same operation to many columns at once. To demonstrate, we’ll use mutate with across to make the appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. across takes two inputs: the columns to operate on, which can be chosen with the same helpers that select uses, and the function(s) to apply to them[5](#fn5).
``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ```
``` Rows: 708 Columns: 7
$ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,…
$ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",…
$ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"…
$ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32…
$ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",…
$ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68…
$ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ```
Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables.
``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ```
``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ```
Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far.
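First, though, here is a bare-bones sketch of the split-apply-combine pattern on its own (a hedged example; output not shown):

```
# split by team, apply mean() to the points column, combine into one table
bball %>%
  group_by(Tm) %>%
  summarize(avg_PTS = mean(PTS, na.rm = TRUE))
```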
Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. 
`group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `X3P.`, with a dot standing in for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables. We can use rename to change column names. A basic example is as follows.
``` data %>% rename(new_name = old_name, new_name2 = old_name2) ```
Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns.
``` bball = bball %>% rename_with( str_replace, # function contains('.'), # columns pattern = '\\.', # function arguments replacement = '%' ) %>% rename_with(str_remove, starts_with('X'), pattern = 'X') colnames(bball) ```
``` [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%" [15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV" [29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif" ```
Merging Data ------------ Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions.
* inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combinations of the matches are returned.
* left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.
* semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x.
* anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x.
* full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing.
Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration.
``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. ``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X = rnorm(10, 0, 1), Y = rnorm(10, 0, 2), Z = rnorm(10, 0, 4) ) stocks %>% head ``` ``` time X Y Z 1 2009-01-01 -1.23994442 -4.8515935 3.7985281 2 2009-01-02 0.65851483 0.9552487 -2.7255786 3 2009-01-03 -0.91146059 -0.0321312 0.6175274 4 2009-01-04 1.85598621 1.1919978 -2.4837558 5 2009-01-05 0.37266866 0.6297287 -1.1330732 6 2009-01-06 -0.06072664 -2.8673242 1.7155168 ``` ``` stocks %>% pivot_longer( cols = -time, # works similar to using select() names_to = 'stock', # the name of the column that will have column names as labels values_to = 'price' # the name of the column for the values ) %>% head() ``` ``` # A tibble: 6 x 3 time stock price <date> <chr> <dbl> 1 2009-01-01 X -1.24 2 2009-01-01 Y -4.85 3 2009-01-01 Z 3.80 4 2009-01-02 X 0.659 5 2009-01-02 Y 0.955 6 2009-01-02 Z -2.73 ``` Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and posit the separator for the column names. ``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X_1 = rnorm(10, 0, 1), X_2 = rnorm(10, 0, 1), Y_1 = rnorm(10, 0, 2), Y_2 = rnorm(10, 0, 2), Z_1 = rnorm(10, 0, 4), Z_2 = rnorm(10, 0, 4) ) head(stocks) ``` ``` time X_1 X_2 Y_1 Y_2 Z_1 Z_2 1 2009-01-01 -0.9675529 -0.72793192 0.7516393 0.03321408 3.7485540 0.3945022 2 2009-01-02 -0.1780449 0.08926355 -0.1976137 1.53569057 -0.0315400 7.6285628 3 2009-01-03 0.2958189 0.38118235 1.6730362 -1.13635638 0.1543268 -5.9254785 4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959 6.6867209 5 2009-01-05 1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866 6 2009-01-06 1.1623828 -0.27362716 -0.3116307 2.73462718 0.6675895 1.9884072 ``` ``` stocks %>% pivot_longer( cols = -time, names_to = c('stock', 'entry'), names_sep = '_', values_to = 'price' ) %>% head() ``` ``` # A tibble: 6 x 4 time stock entry price <date> <chr> <chr> <dbl> 1 2009-01-01 X 1 -0.968 2 2009-01-01 X 2 -0.728 3 2009-01-01 Y 1 0.752 4 2009-01-01 Y 2 0.0332 5 2009-01-01 Z 1 3.75 6 2009-01-01 Z 2 0.395 ``` Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization. The following demonstrates the separate function utilized for a very common data processing task\- dealing with names. Here’ we’ll separate player into first and last names based on the space. ``` bball %>% separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% select(1:5) %>% head() ``` ``` Rk first_name last_name Pos Age 1 1 Álex Abrines SG 25 2 2 Quincy Acy PF 28 3 3 Jaylen Adams PG 22 4 4 Steven Adams C 25 5 5 Bam Adebayo C 21 6 6 Deng Adel SF 21 ``` Note that this won’t necessarily apply to every name, so further processing may be required. More Tidyverse -------------- * dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often. * broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\. * tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. 
Some examples of the ones I’ve used: + tidycensus + tidybayes + tidytext + modelr Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy).
Personal Opinion ---------------- The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization. * No need to create/overwrite existing objects * Can overwrite columns and use as they are created * Makes it easy to look at anything, and do otherwise tedious data checks Drawbacks: * Not as fast as data.table or even some base R approaches for many things[8](#fn8) * The *mindset* can make for unnecessary complication + e.g. There is no need to pipe to create a single new variable * Some approaches are not very intuitive * Notably less ability to work with some very common data structures (e.g. matrices) All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended!
Tidyverse Exercises ------------------- ### Exercise 0 Install and load the dplyr and ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB.
``` install.packages('ggplot2movies') library(ggplot2movies) data('movies') ```
### Exercise 1 Using the movies data set, perform each of the following actions separately.
#### Exercise 1a Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form:
``` data %>% mutate(new_var_name = '?') ```
#### Exercise 1b Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`.
#### Exercise 1c Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this.
#### Exercise 1d Rename the `length` column to `length_in_min` (i.e. length in minutes).
### Exercise 2 Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded.
### Exercise 3 Use pivot\_longer to create a ‘tidy’ data set from the following.
``` dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10)) ```
### Exercise 4 Now put several actions together in one set of piped operations. * Filter movies released *after* 1990 * select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables * group by `mpaa` *and* (your choice) `Action` *or* `Drama` * get the average rating It should spit out something like the following:
``` # A tibble: 10 x 3 # Groups: mpaa [5] mpaa Drama AvgRating <chr> <int> <dbl> 1 "" 0 5.94 2 "" 1 6.20 3 "NC-17" 0 4.28 4 "NC-17" 1 4.62 5 "PG" 0 5.19 6 "PG" 1 6.15 7 "PG-13" 0 5.44 8 "PG-13" 1 6.14 9 "R" 0 4.86 10 "R" 1 5.94 ```
Python Pandas Notebook ---------------------- [Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb)
``` load('data/bball.RData') glimpse(bball[,1:5]) ``` ``` Rows: 734 Columns: 5 $ Rk <chr> "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "16", "16", "17", "18", "19", "20", "Rk", "21", "22", "23", "23", "23", "24", "25", "26", "27", "28", "28", "28", "… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "Pos", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG… $ Age <chr> "25", "28", "22", "25", "21", "21", "25", "33", "21", "23", "20", "26", "28", "25", "25", "30", "30", "30", "20", "24", "21", "34", "Age", "21", "24", "33", "33", "33", "31", "20", "23", "19", "25", "25… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "Tm", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", … ``` Selecting Columns ----------------- Often you do not need the entire data set. While this is easily handled in base R (as shown earlier), it can be more clear to use select in dplyr. Now we won’t have to create separate objects, use quotes or $, etc. ``` bball %>% select(Player, Tm, Pos) %>% head() ``` ``` Player Tm Pos 1 Álex Abrines OKC SG 2 Quincy Acy PHO PF 3 Jaylen Adams ATL PG 4 Steven Adams OKC C 5 Bam Adebayo MIA C 6 Deng Adel CLE SF ``` What if we want to drop some variables? ``` bball %>% select(-Player, -Tm, -Pos) %>% head() ``` ``` Rk Age G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. ORB DRB TRB AST STL BLK TOV PF PTS 1 1 25 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 28 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 22 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 25 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 21 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 21 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 ``` ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. ``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. ### Helper functions Sometimes, we have a lot of variables to select, and if they have a common naming scheme, this can be very easy. 
``` bball %>% select(Player, contains("3P"), ends_with("RB")) %>% arrange(desc(TRB)) %>% head() ``` ``` Player X3P X3PA X3P. ORB DRB TRB 1 Player 3P 3PA 3P% ORB DRB TRB 2 Player 3P 3PA 3P% ORB DRB TRB 3 Player 3P 3PA 3P% ORB DRB TRB 4 Player 3P 3PA 3P% ORB DRB TRB 5 Player 3P 3PA 3P% ORB DRB TRB 6 Player 3P 3PA 3P% ORB DRB TRB ``` The select also has helper functions to make selecting columns even easier. I probably don’t even need to explain what’s being done above, and this is the power of the tidyverse way. Here is the list of *helper functions* to be aware of: * starts\_with: starts with a prefix * ends\_with: ends with a suffix * contains: contains a literal string * matches: matches a regular expression * num\_range: a numerical range like x01, x02, x03\. * one\_of: variables in character vector. * everything: all variables. Filtering Rows -------------- There are repeated header rows in this data[3](#fn3), so we need to drop them. This is also why everything was character string when we first scraped it, because having any character strings in a column coerces the entire column to be character, since all elements of a vector [need to be of the same type](data_structures.html#vectors). Character string is chosen over others because anything can be converted to a string, but not everything can be a number. Filtering by rows requires the basic indexing knowledge [we talked about before](indexing.html#indexing), especially Boolean indexing. In the following, `Rk`, or rank, is for all intents and purposes just a row id, but if it equals the actual text ‘Rk’ instead of something else, we know we’re dealing with a header row, so we’ll drop it. ``` bball = bball %>% filter(Rk != "Rk") ``` * filter returns rows with matching conditions. * slice allows for a numeric indexing approach[4](#fn4). Say we want to look at forwards (SF or PF) over the age of 35\. The following will do this, and since some players play on multiple teams, we’ll want only the unique information on the variables of interest. The function distinct allows us to do this. ``` bball %>% filter(Age > 35, Pos == "SF" | Pos == "PF") %>% distinct(Player, Pos, Age) ``` ``` Player Pos Age 1 Vince Carter PF 42 2 Kyle Korver PF 37 3 Dirk Nowitzki PF 40 ``` Maybe we want just the first 10 rows. This is often the case when we perform some operation and need to quickly verify that what we’re doing is working in principle. ``` bball %>% slice(1:10) ``` ``` Rk Player Pos Age Tm G GS MP FG FGA FG. X3P X3PA X3P. X2P X2PA X2P. eFG. FT FTA FT. 
ORB DRB TRB AST STL BLK TOV PF PTS 1 1 Álex Abrines SG 25 OKC 31 2 588 56 157 .357 41 127 .323 15 30 .500 .487 12 13 .923 5 43 48 20 17 6 14 53 165 2 2 Quincy Acy PF 28 PHO 10 0 123 4 18 .222 2 15 .133 2 3 .667 .278 7 10 .700 3 22 25 8 1 4 4 24 17 3 3 Jaylen Adams PG 22 ATL 34 1 428 38 110 .345 25 74 .338 13 36 .361 .459 7 9 .778 11 49 60 65 14 5 28 45 108 4 4 Steven Adams C 25 OKC 80 80 2669 481 809 .595 0 2 .000 481 807 .596 .595 146 292 .500 391 369 760 124 117 76 135 204 1108 5 5 Bam Adebayo C 21 MIA 82 28 1913 280 486 .576 3 15 .200 277 471 .588 .579 166 226 .735 165 432 597 184 71 65 121 203 729 6 6 Deng Adel SF 21 CLE 19 3 194 11 36 .306 6 23 .261 5 13 .385 .389 4 4 1.000 3 16 19 5 1 4 6 13 32 7 7 DeVaughn Akoon-Purcell SG 25 DEN 7 0 22 3 10 .300 0 4 .000 3 6 .500 .300 1 2 .500 1 3 4 6 2 0 2 4 7 8 8 LaMarcus Aldridge C 33 SAS 81 81 2687 684 1319 .519 10 42 .238 674 1277 .528 .522 349 412 .847 251 493 744 194 43 107 144 179 1727 9 9 Rawle Alkins SG 21 CHI 10 1 120 13 39 .333 3 12 .250 10 27 .370 .372 8 12 .667 11 15 26 13 1 0 8 7 37 10 10 Grayson Allen SG 23 UTA 38 2 416 67 178 .376 32 99 .323 35 79 .443 .466 45 60 .750 3 20 23 25 6 6 33 47 211 ``` We can use filtering even with variables just created. ``` bball %>% unite("posTeam", Pos, Tm) %>% # create a new variable filter(posTeam == "SG_GSW") %>% # use it for filtering select(Player, posTeam, Age) %>% # use it for selection arrange(desc(Age)) # descending order ``` ``` Player posTeam Age 1 Klay Thompson SG_GSW 28 2 Damion Lee SG_GSW 26 3 Jacob Evans SG_GSW 21 ``` Being able to use a newly created variable on the fly, possibly only to filter or create some other variable, goes a long way toward easy visualization and generation of desired summary statistics. Generating New Data ------------------- One of the most common data processing tasks is generating new variables. The function mutate takes a vector and returns one of the same dimension. In addition, there is mutate\_at, mutate\_if, and mutate\_all to help with specific scenarios. To demonstrate, we’ll use mutate\_at to make appropriate columns numeric, i.e. everything except `Player`, `Pos`, and `Tm`. It takes two inputs, variables and functions to apply. As there are multiple variables and (potentially) multiple functions, we use the vars and funs functions to denote them[5](#fn5). 
``` bball = bball %>% mutate(across(c(-Player, -Pos, -Tm), as.numeric)) glimpse(bball[,1:7]) ``` ``` Rows: 708 Columns: 7 $ Rk <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 16, 16, 17, 18, 19, 20, 21, 22, 23, 23, 23, 24, 25, 26, 27, 28, 28, 28, 29, 30, 31, 32, 33, 33, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,… $ Player <chr> "Álex Abrines", "Quincy Acy", "Jaylen Adams", "Steven Adams", "Bam Adebayo", "Deng Adel", "DeVaughn Akoon-Purcell", "LaMarcus Aldridge", "Rawle Alkins", "Grayson Allen", "Jarrett Allen", "Kadeem Allen",… $ Pos <chr> "SG", "PF", "PG", "C", "C", "SF", "SG", "C", "SG", "SG", "C", "SG", "PF", "SF", "SF", "PF", "PF", "PF", "C", "PF", "PF", "PF", "SF", "PG", "SF", "SF", "SF", "PG", "C", "SG", "PF", "SG", "SG", "SG", "PG"… $ Age <dbl> 25, 28, 22, 25, 21, 21, 25, 33, 21, 23, 20, 26, 28, 25, 25, 30, 30, 30, 20, 24, 21, 34, 21, 24, 33, 33, 33, 31, 20, 23, 19, 25, 25, 25, 22, 21, 20, 34, 26, 26, 26, 28, 23, 30, 30, 32, 29, 25, 22, 30, 32… $ Tm <chr> "OKC", "PHO", "ATL", "OKC", "MIA", "CLE", "DEN", "SAS", "CHI", "UTA", "BRK", "NYK", "POR", "ATL", "MEM", "TOT", "PHO", "MIA", "IND", "MIL", "DAL", "HOU", "TOR", "CHI", "TOT", "PHO", "WAS", "ORL", "PHO",… $ G <dbl> 31, 10, 34, 80, 82, 19, 7, 81, 10, 38, 80, 19, 81, 48, 43, 25, 15, 10, 3, 72, 2, 10, 67, 81, 69, 26, 43, 81, 71, 43, 62, 15, 11, 4, 16, 47, 47, 38, 77, 49, 28, 43, 30, 75, 34, 51, 67, 82, 81, 26, 79, 68… $ GS <dbl> 2, 0, 1, 80, 28, 3, 0, 81, 1, 2, 80, 1, 81, 4, 40, 8, 8, 0, 0, 72, 0, 2, 6, 32, 69, 26, 43, 81, 70, 13, 4, 0, 0, 0, 0, 45, 1, 0, 77, 49, 28, 38, 3, 72, 6, 18, 35, 82, 18, 2, 1, 3, 15, 27, 0, 12, 49, 1, … ``` Now that the data columns are of the correct type, the following demonstrates how we can use the standard mutate function to create composites of existing variables. ``` bball = bball %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. ) summary(select(bball, shootingDif)) # select and others don't have to be piped to use ``` ``` shootingDif Min. :-0.08561 1st Qu.: 0.06722 Median : 0.09829 Mean : 0.09420 3rd Qu.: 0.12379 Max. : 0.53192 NA's :6 ``` Grouping and Summarizing Data ----------------------------- Another very common task is to look at group\-based statistics, and we can use group\_by and summarize to help us in this regard[6](#fn6). Base R has things like aggregate, by, and tapply for this, but they should not be used, as this approach is much more straightforward, flexible, and faster. Conceptually we are doing a three\-phase task: **split**, **apply**, **combine**. We split the data into subsets, apply a function, and then combine the results back into a single output. In applying a function, we may do any of the previously demonstrated tasks: calculate some statistic, generate new data, or even filter to a reduced part of the data. For this demonstration, I’m going to start putting together several things we’ve demonstrated thus far. Ultimately we’ll create a variable called trueShooting, which represents ‘true shooting percentage’, and get an average for each position, and compare it to the average field goal percentage. ``` bball %>% select(Pos, FG, FGA, FG., FTA, X3P, PTS) %>% mutate( trueShooting = PTS / (2 * (FGA + (.44 * FTA))), effectiveFG = (FG + (.5 * X3P)) / FGA, shootingDif = trueShooting - FG. 
) %>% group_by(Pos) %>% summarize( `Mean FG%` = mean(FG., na.rm = TRUE), `Mean True Shooting` = mean(trueShooting, na.rm = TRUE) ) ``` ``` # A tibble: 11 x 3 Pos `Mean FG%` `Mean True Shooting` <chr> <dbl> <dbl> 1 C 0.522 0.572 2 C-PF 0.407 0.530 3 PF 0.442 0.536 4 PF-C 0.356 0.492 5 PF-SF 0.419 0.544 6 PG 0.409 0.512 7 SF 0.425 0.529 8 SF-SG 0.431 0.558 9 SG 0.407 0.517 10 SG-PF 0.416 0.582 11 SG-SF 0.38 0.466 ``` We can do even more with grouped data. Specifically, we can create a new *list\-column* in the data, the elements of which can be anything, even the results of an analysis for each group. As such, we can use tidyr’s unnest to get back to a standard data frame. To demonstrate, the following will group data by position, then get the correlation between field\-goal percentage and free\-throw shooting percentage. Some players are listed with multiple positions, so we will reduce those to whatever their first position is using case\_when. ``` bball %>% mutate( Pos = case_when( Pos == 'PG-SG' ~ 'PG', Pos == 'C-PF' ~ 'C', Pos == 'SF-SG' ~ 'SF', Pos == 'PF-C' | Pos == 'PF-SF' ~ 'PF', Pos == 'SG-PF' | Pos == 'SG-SF' ~ 'SG', TRUE ~ Pos )) %>% nest_by(Pos) %>% mutate(FgFt_Corr = list(cor(data$FG., data$FT., use = 'complete'))) %>% unnest(c(Pos, FgFt_Corr)) ``` ``` # A tibble: 5 x 3 # Groups: Pos [5] Pos data FgFt_Corr <chr> <list<tbl_df[,32]>> <dbl> 1 C [121 × 32] -0.122 2 PF [150 × 32] -0.0186 3 PG [139 × 32] 0.0857 4 SF [120 × 32] 0.00422 5 SG [178 × 32] -0.0585 ``` As a reminder, data frames are lists. As such, anything can go into the ‘columns’, even regression models! ``` library(nycflights13) carriers = group_by(flights, carrier) group_size(carriers) # if you're curious, there is a function to quickly get group Ns ``` ``` [1] 18460 32729 714 54635 48110 54173 685 3260 342 26397 32 58665 20536 5162 12275 601 ``` ``` mods = flights %>% nest_by(carrier) %>% mutate(model = list(lm(arr_delay ~ dep_time, data = data)) ) mods ``` ``` # A tibble: 16 x 3 # Rowwise: carrier carrier data model <chr> <list<tbl_df[,18]>> <list> 1 9E [18,460 × 18] <lm> 2 AA [32,729 × 18] <lm> 3 AS [714 × 18] <lm> 4 B6 [54,635 × 18] <lm> 5 DL [48,110 × 18] <lm> 6 EV [54,173 × 18] <lm> 7 F9 [685 × 18] <lm> 8 FL [3,260 × 18] <lm> 9 HA [342 × 18] <lm> 10 MQ [26,397 × 18] <lm> 11 OO [32 × 18] <lm> 12 UA [58,665 × 18] <lm> 13 US [20,536 × 18] <lm> 14 VX [5,162 × 18] <lm> 15 WN [12,275 × 18] <lm> 16 YV [601 × 18] <lm> ``` ``` mods %>% summarize( carrier = carrier, `Adjusted Rsq` = summary(model)$adj.r.squared, coef_dep_time = coef(model)[2] ) ``` ``` # A tibble: 16 x 3 # Groups: carrier [16] carrier `Adjusted Rsq` coef_dep_time <chr> <dbl> <dbl> 1 9E 0.0513 0.0252 2 AA 0.0504 0.0209 3 AS 0.0815 0.0186 4 B6 0.0241 0.0120 5 DL 0.0347 0.0179 6 EV 0.0836 0.0290 7 F9 0.0998 0.0484 8 FL 0.0261 0.0183 9 HA -0.00124 -0.0578 10 MQ 0.0499 0.0218 11 OO -0.0189 0.0394 12 UA 0.0673 0.0220 13 US 0.0575 0.0174 14 VX 0.111 0.0362 15 WN 0.119 0.0345 16 YV 0.137 0.0805 ``` You can use group\_by on more than one variable, e.g. `group_by(var1, var2)` Renaming Columns ---------------- Tibbles in the tidyverse don’t really have a problem with variable names starting with numbers or incorporating symbols and spaces. I would still suggest it is poor practice, because even if your data set looks fine, you’ll possibly encounter problems with modeling and visualization packages using that data. However, as a demonstration, we can ‘fix’ some of the variable names. 
One issue is that when we scraped the data and converted it to a data.frame, the names that started with a number, like `3P` for ‘three point baskets made’, were made into `X3P`, because that’s the way R works by default. In addition, `3P%`, i.e. three point percentage made, was made into `3P.` with a dot for the percent sign. Same goes for the 2P (two\-pointers) and FT (free\-throw) variables. We can use rename to change column names. A basic example is as follows. ``` data %>% rename(new_name = old_name, new_name2 = old_name2) ``` Very straightforward. However, oftentimes we’ll need to change *patterns*, as with our current problem. The following uses str\_replace and str\_remove from stringr to look for a pattern in a name, and replace that pattern with some other pattern. It uses *regular expressions* for the patterns. ``` bball = bball %>% rename_with( str_replace, # function contains('.'), # columns pattern = '\\.', # function arguments replacement = '%' ) %>% rename_with(str_remove, starts_with('X'), pattern = 'X') colnames(bball) ``` ``` [1] "Rk" "Player" "Pos" "Age" "Tm" "G" "GS" "MP" "FG" "FGA" "FG%" "3P" "3PA" "3P%" [15] "2P" "2PA" "2P%" "eFG%" "FT" "FTA" "FT%" "ORB" "DRB" "TRB" "AST" "STL" "BLK" "TOV" [29] "PF" "PTS" "trueShooting" "effectiveFG" "shootingDif" ``` Merging Data ------------ Merging data is yet another very common data task, as data often comes from multiple sources. In order to do this, we need some common identifier among the sources by which to join them. The following is a list of dplyr join functions. inner\_join: return all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned. left\_join: return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. right\_join: return all rows from y, and all columns from x and y. Rows in y with no match in x will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned. semi\_join: return all rows from x where there are matching values in y, keeping just columns from x. It differs from an inner join because an inner join will return one row of x for each matching row of y, where a semi join will never duplicate rows of x. anti\_join: return all rows from x where there are not matching values in y, keeping just columns from x. full\_join: return all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing. Probably the most common is a left join, where we have one primary data set, and are adding data from another source to it while retaining it as a base. The following is a simple demonstration. ``` band_members ``` ``` # A tibble: 3 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year ``` ``` band_instruments ``` ``` # A tibble: 3 x 2 Name Instrument <chr> <chr> 1 Francis Guitar 2 Bubba Guitar 3 Seth Synthesizer ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` When we don’t have a one to one match, the result of the different types of join will become more apparent. 
``` band_members ``` ``` # A tibble: 4 x 2 Name Band <chr> <chr> 1 Seth Com Truise 2 Francis Pixies 3 Bubba The New Year 4 Stephen Pavement ``` ``` band_instruments ``` ``` # A tibble: 4 x 2 Name Instrument <chr> <chr> 1 Seth Synthesizer 2 Francis Guitar 3 Bubba Guitar 4 Steve Rage ``` ``` left_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> ``` ``` right_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 4 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Steve <NA> Rage ``` ``` inner_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 3 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar ``` ``` full_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 5 x 3 Name Band Instrument <chr> <chr> <chr> 1 Seth Com Truise Synthesizer 2 Francis Pixies Guitar 3 Bubba The New Year Guitar 4 Stephen Pavement <NA> 5 Steve <NA> Rage ``` ``` anti_join(band_members, band_instruments) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Band <chr> <chr> 1 Stephen Pavement ``` ``` anti_join(band_instruments, band_members) ``` ``` Joining, by = "Name" ``` ``` # A tibble: 1 x 2 Name Instrument <chr> <chr> 1 Steve Rage ``` Merges can get quite complex, and involve multiple data sources. In many cases you may have to do a lot of processing before getting to the merge, but dplyr’s joins will help quite a bit. Pivoting axes ------------- The tidyr package can be thought of as a specialized subset of dplyr’s functionality, as well as an update to the previous reshape and reshape2 packages[7](#fn7). Some of its functions for manipulating data you’ll want to be familiar with are: * pivot\_longer: convert data from a wider format to longer one * pivot\_wider: convert data from a longer format to wider one * unite: paste together multiple columns into one * separate: complement of unite * unnest: expand ‘list columns’ The following example shows how we take a ‘wide\-form’ data set, where multiple columns represent different stock prices, and turn it into two columns, one representing stock name, and one for the price. We need to know which columns to work on, which is the first entry. This function works very much like select, where you can use helpers. Then we need to give a name to the column(s) representing the indicators of what were multiple columns in the wide format. And finally we need to specify the column(s) of the values. 
``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X = rnorm(10, 0, 1), Y = rnorm(10, 0, 2), Z = rnorm(10, 0, 4) ) stocks %>% head ``` ``` time X Y Z 1 2009-01-01 -1.23994442 -4.8515935 3.7985281 2 2009-01-02 0.65851483 0.9552487 -2.7255786 3 2009-01-03 -0.91146059 -0.0321312 0.6175274 4 2009-01-04 1.85598621 1.1919978 -2.4837558 5 2009-01-05 0.37266866 0.6297287 -1.1330732 6 2009-01-06 -0.06072664 -2.8673242 1.7155168 ``` ``` stocks %>% pivot_longer( cols = -time, # works similar to using select() names_to = 'stock', # the name of the column that will have column names as labels values_to = 'price' # the name of the column for the values ) %>% head() ``` ``` # A tibble: 6 x 3 time stock price <date> <chr> <dbl> 1 2009-01-01 X -1.24 2 2009-01-01 Y -4.85 3 2009-01-01 Z 3.80 4 2009-01-02 X 0.659 5 2009-01-02 Y 0.955 6 2009-01-02 Z -2.73 ``` Here is a more complex example where we can handle multiple repeated entries. We additionally add another column for labeling, and posit the separator for the column names. ``` library(tidyr) stocks <- data.frame( time = as.Date('2009-01-01') + 0:9, X_1 = rnorm(10, 0, 1), X_2 = rnorm(10, 0, 1), Y_1 = rnorm(10, 0, 2), Y_2 = rnorm(10, 0, 2), Z_1 = rnorm(10, 0, 4), Z_2 = rnorm(10, 0, 4) ) head(stocks) ``` ``` time X_1 X_2 Y_1 Y_2 Z_1 Z_2 1 2009-01-01 -0.9675529 -0.72793192 0.7516393 0.03321408 3.7485540 0.3945022 2 2009-01-02 -0.1780449 0.08926355 -0.1976137 1.53569057 -0.0315400 7.6285628 3 2009-01-03 0.2958189 0.38118235 1.6730362 -1.13635638 0.1543268 -5.9254785 4 2009-01-04 -0.7805814 -0.67370673 -0.5696378 -3.62905335 -2.4256959 6.6867209 5 2009-01-05 1.7910958 -0.32353046 -1.6786235 -1.55989831 -4.4294289 -8.1844866 6 2009-01-06 1.1623828 -0.27362716 -0.3116307 2.73462718 0.6675895 1.9884072 ``` ``` stocks %>% pivot_longer( cols = -time, names_to = c('stock', 'entry'), names_sep = '_', values_to = 'price' ) %>% head() ``` ``` # A tibble: 6 x 4 time stock entry price <date> <chr> <chr> <dbl> 1 2009-01-01 X 1 -0.968 2 2009-01-01 X 2 -0.728 3 2009-01-01 Y 1 0.752 4 2009-01-01 Y 2 0.0332 5 2009-01-01 Z 1 3.75 6 2009-01-01 Z 2 0.395 ``` Note that the latter is an example of *tidy data* while the former is not. Why do we generally prefer such data? Precisely because the most common data operations, grouping, filtering, etc., would work notably more efficiently with such data. This is especially the case for visualization. The following demonstrates the separate function utilized for a very common data processing task\- dealing with names. Here’ we’ll separate player into first and last names based on the space. ``` bball %>% separate(Player, into=c('first_name', 'last_name'), sep=' ') %>% select(1:5) %>% head() ``` ``` Rk first_name last_name Pos Age 1 1 Álex Abrines SG 25 2 2 Quincy Acy PF 28 3 3 Jaylen Adams PG 22 4 4 Steven Adams C 25 5 5 Bam Adebayo C 21 6 6 Deng Adel SF 21 ``` Note that this won’t necessarily apply to every name, so further processing may be required. More Tidyverse -------------- * dplyr functions: There are over a hundred utility functions that perform very common tasks. You really need to be aware of them, as their use will come up often. * broom: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like dplyr, tidyr and ggplot2\. * tidy\*: a lot of packages out there are now ‘tidy’, though not a part of the official tidyverse. 
Some examples of the ones I’ve used: + tidycensus + tidybayes + tidytext + modelr Seriously, there are [a lot](https://www.r-pkg.org/search.html?q=tidy). Personal Opinion ---------------- The dplyr grammar is clear for a lot of standard data processing tasks, and some not so common. Extremely useful for data exploration and visualization. * No need to create/overwrite existing objects * Can overwrite columns and use as they are created * Makes it easy to look at anything, and do otherwise tedious data checks Drawbacks: * Not as fast as data.table or even some base R approaches for many things[8](#fn8) * The *mindset* can make for unnecessary complication + e.g. There is no need to pipe to create a single new variable * Some approaches, are not very intuitive * Notably less ability to work with some very common data structures (e.g. matrices) All in all, if you’ve only been using base R approaches, the tidyverse will change your R life! It makes all the sorts of things you do all the time easier and clearer. Highly recommended! Tidyverse Exercises ------------------- ### Exercise 0 Install and load the dplyr ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB. ``` install.packages('ggplot2movies') library(ggplot2movies) data('movies') ``` ### Exercise 1 Using the movies data set, perform each of the following actions separately. #### Exercise 1a Use mutate to create a centered version of the rating variable. A centered variable is one whose mean has been subtracted from it. The process will take the following form: ``` data %>% mutate(new_var_name = '?') ``` #### Exercise 1b Use filter to create a new data frame that has only movies from the years 2000 and beyond. Use the greater than or equal operator `>=`. #### Exercise 1c Use select to create a new data frame that only has the `title`, `year`, `budget`, `length`, `rating` and `votes` variables. There are at least 3 ways to do this. #### Exercise 1d Rename the `length` column to `length_in_min` (i.e. length in minutes). ### Exercise 2 Use group\_by to group the data by year, and summarize to create a new variable that is the average budget. The summarize function works just like mutate in this case. Use the mean function to get the average, but you’ll also need to use the argument `na.rm = TRUE` within it because the earliest years have no budget recorded. ### Exercise 3 Use pivot\_longer to create a ‘tidy’ data set from the following. ``` dat = tibble(id = 1:10, x = rnorm(10), y = rnorm(10)) ``` ### Exercise 4 Now put several actions together in one set of piped operations. * Filter movies released *after* 1990 * select the same variables as before but also the `mpaa`, `Action`, and `Drama` variables * group by `mpaa` *and* (your choice) `Action` *or* `Drama` * get the average rating It should spit out something like the following: ``` # A tibble: 10 x 3 # Groups: mpaa [5] mpaa Drama AvgRating <chr> <int> <dbl> 1 "" 0 5.94 2 "" 1 6.20 3 "NC-17" 0 4.28 4 "NC-17" 1 4.62 5 "PG" 0 5.19 6 "PG" 1 6.15 7 "PG-13" 0 5.44 8 "PG-13" 1 6.14 9 "R" 0 4.86 10 "R" 1 5.94 ``` ### Exercise 0 Install and load the dplyr ggplot2movies packages. Look at the help file for the `movies` data set, which contains data from IMDB. ``` install.packages('ggplot2movies') library(ggplot2movies) data('movies') ``` ### Exercise 1 Using the movies data set, perform each of the following actions separately. #### Exercise 1a Use mutate to create a centered version of the rating variable. 
Python Pandas Notebook
----------------------

[Available on GitHub](https://github.com/m-clark/data-processing-and-visualization/blob/master/jupyter_notebooks/pandaverse.ipynb)
Text Analysis
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_table.html
data.table
==========

Another package for data processing that has been useful to many is data.table. It works in a notably different way than dplyr. However, you'd use it for the same reasons, e.g. subset, grouping, update, ordered joins etc., but with key advantages in speed and memory efficiency. Like dplyr, the data objects are both data.frames and a package-specific class.

```
library(data.table)
dt = data.table(x = sample(1:10, 6),
                g = 1:3,
                y = runif(6))
class(dt)
```

```
[1] "data.table" "data.frame"
```

data.table Basics
-----------------

In general, data.table works with brackets as in base R data frames. However, in order to use data.table effectively you'll need to forget the data frame similarity. The brackets actually work like a function call, with several key arguments. Consider the following notation to start.

```
x[i, j, by, keyby, with = TRUE, ...]
```

Importantly: *you don't use the brackets as you would with data.frames*. What **i** and **j** can be is fairly complex.

In general, you use **i** for filtering by rows.

```
dt[2]    # rows! not columns as with standard data.frame
dt[2,]
```

```
   x g         y
1: 5 2 0.1079452
   x g         y
1: 5 2 0.1079452
```

You use **j** to select (by name!) or create new columns. We can define a new column with the `:=` operator.

```
dt[,x]
dt[,z := x+y]       # dt now has a new column
dt[,z]
dt[g > 1, mean(z), by = g]
dt
```

```
[1] 6 5 2 9 8 1
[1] 6.908980 5.107945 2.843715 9.780681 8.215221 1.334649
   g       V1
1: 2 6.661583
2: 3 2.089182
   x g         y        z
1: 6 1 0.9089802 6.908980
2: 5 2 0.1079452 5.107945
3: 2 3 0.8437154 2.843715
4: 9 1 0.7806815 9.780681
5: 8 2 0.2152209 8.215221
6: 1 3 0.3346486 1.334649
```

Because **j** is an argument, dropping columns is awkward.

```
dt[, -y]             # creates negative values of y
dt[, -'y', with = F] # drops y, but now needs quotes
## dt[, y := NULL]   # drops y, but this is just a base R approach
## dt$y = NULL
```

```
[1] -0.9089802 -0.1079452 -0.8437154 -0.7806815 -0.2152209 -0.3346486
   x g        z
1: 6 1 6.908980
2: 5 2 5.107945
3: 2 3 2.843715
4: 9 1 9.780681
5: 8 2 8.215221
6: 1 3 1.334649
```

data.table does not make unnecessary copies. For example, if we do the following…

```
DT = data.table(A = 5:1, B = letters[5:1])
DT2 = DT
DT3 = copy(DT)
```

DT2 and DT are just names for the same table. You'd actually need to use the copy function to make an explicit copy, otherwise whatever you do to DT2 will be done to DT.

```
DT2[,q:=1]
DT
```

```
   A B q
1: 5 e 1
2: 4 d 1
3: 3 c 1
4: 2 b 1
5: 1 a 1
```

```
DT3
```

```
   A B
1: 5 e
2: 4 d
3: 3 c
4: 2 b
5: 1 a
```

Grouped Operations
------------------

We can now attempt a 'group-by' operation, along with creation of a new variable. Note that these operations actually modify the dt object *in place*, a key distinction from dplyr. Fewer copies means less of a memory hit.

```
dt1 = dt2 = dt

dt[, sum(x, y), by = g]         # sum of all x and y values
```

```
   g        V1
1: 1 16.689662
2: 2 13.323166
3: 3  4.178364
```

```
dt1[, mysum := sum(x), by = g]  # add new variable to the original data
dt1
```

```
   x g         y        z mysum
1: 6 1 0.9089802 6.908980    15
2: 5 2 0.1079452 5.107945    13
3: 2 3 0.8437154 2.843715     3
4: 9 1 0.7806815 9.780681    15
5: 8 2 0.2152209 8.215221    13
6: 1 3 0.3346486 1.334649     3
```
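Two data.table idioms that come up constantly, but aren't spelled out above, are the `.()` shorthand for `list()` in **j** and the special symbol `.N`, which gives the number of rows in the current group. The following is a small sketch using the `dt` object created above; these calls are standard data.table usage, added here only for illustration.

```
# .( ) is an alias for list(), so several named summary columns can be
# created at once; .N counts the rows in each group.
dt[, .(mean_z = mean(z), total_x = sum(x), n = .N), by = g]

# i, j, and by combine in a single call: filter rows, then summarize by group.
dt[x > 2, .(mean_y = mean(y)), by = g]
```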
We can also create groupings on the fly. For a new summary data set, we'll take the following approach: we create a grouping based on whether `g` is a value of one or not, then get the mean and sum of `x` for those two categories. The corresponding dplyr approach is also shown (but not evaluated) for comparison.

```
dt2[, list(mean_x = mean(x), sum_x = sum(x)), by = g == 1]
```

```
       g mean_x sum_x
1:  TRUE    7.5    15
2: FALSE    4.0    16
```

```
## dt2 %>% 
##   group_by(g == 1) %>% 
##   summarise(mean_x = mean(x), sum_x = sum(x))
```

Faster!
-------

As mentioned, the reason to use data.table is speed. If you have large data or large operations, it'll be useful.

### Joins

Joins can not only be faster but also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to dplyr's comparable left_join function.

```
dt1 = setkey(dt1, x)
dt1[dt2]

dt1_df = dt2_df = as.data.frame(dt1)
left_join(dt1_df, dt2_df, by = 'x')
```

| func       | mean (microseconds) |
| ---        | ---                 |
| dt_join    | 504.77              |
| dplyr_join | 1588.46             |

### Group by

We can use the setkey function to order a data set by a certain column (or columns). This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timing below regards creating a new variable.

```
test_dt0 = data.table(x = rnorm(10000000),
                      g = sample(letters, 10000000, replace = T))
test_dt1 = copy(test_dt0)
test_dt2 = setkey(test_dt1, g)

identical(test_dt0, test_dt1)
```

```
[1] FALSE
```

```
identical(test_dt1, test_dt2)
```

```
[1] TRUE
```

```
test_dt0 = test_dt0[, mean := mean(x), by = g]
test_dt1 = test_dt1[, mean := mean(x), by = g]
test_dt2 = test_dt2[, mean := mean(x), by = g]
```

| func     | mean (milliseconds) |
| ---      | ---                 |
| test_dt0 | 381.29              |
| test_dt1 | 118.52              |
| test_dt2 | 109.97              |

### String matching

The `%chin%` operator is the character-vector counterpart of `%in%`: it returns a logical vector indicating whether each element of its first argument has a match in its second, where both arguments are character vectors (the related chmatch function returns the *positions* of first matches). Consider the following. We sample the first 14 letters 1000 times with replacement and check which of them appear among the last 14 letters. I compare the same operation to stringr and to the stringi package, whose functionality stringr builds on. Both are far slower than `%chin%`.

```
lets_1 = sample(letters[1:14], 1000, replace = T)

lets_1 %chin% letters[13:26] %>% head(10)
```

```
[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
```

```
# stri_detect_regex(lets_1, paste(letters[13:26], collapse='|'))
```

### Reading files

If you use data.table for nothing else, you'd still want to consider it strongly for reading in large text files. The fread function may be quite useful for being memory efficient too. I compare it to readr.

```
fread('data/cars.csv')
```

| func  | mean (microseconds) |
| ---   | ---                 |
| dt    | 430.91              |
| readr | 2900.19             |

### More speed

The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced it on my own machine based on 50 million observations. The grouped operations that are applied are just a sum and length on a vector.

By the way, never, ever use aggregate. For anything.

| fun        | elapsed |
| ---        | ---     |
| aggregate  | 56.857  |
| by         | 18.118  |
| dplyr      | 14.447  |
| sapply     | 12.200  |
| lapply     | 11.309  |
| tapply     | 10.570  |
| data.table | 0.866   |

Ever. Really.

Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. In practice, it likely won't be notably faster except in some cases, like with aggregate.
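The timing tables in this chapter are summaries; the code that produced them isn't shown. If you want to run that kind of comparison yourself, the microbenchmark package is one common way to do it. Here is a minimal sketch for the file-reading comparison, assuming a `data/cars.csv` file exists on your machine.

```
library(data.table)
library(readr)
library(microbenchmark)

# Each expression is evaluated `times` times; the print method summarizes
# the distribution of timings (min, median, mean, max, etc.).
microbenchmark(
  dt    = fread('data/cars.csv'),
  readr = read_csv('data/cars.csv'),
  times = 25
)
```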
Pipe with data.table
--------------------

Piping can be done with data.table objects too, using the brackets, but it's awkward at best.

```
mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group][, -'y', with = FALSE]

mydt[, newvar := mean(x), ][, 
  newvar2 := sum(newvar), by = group ][,
  -'y', with = FALSE]
```

Probably better to just use a standard pipe and dot approach if you really need it.

```
mydt[, newvar := mean(x), ] %>% 
  .[, newvar2 := sum(newvar), by = group] %>% 
  .[, -'y', with = FALSE]
```

data.table Summary
------------------

Faster and more memory-efficient methods are great to have. If you have large data, this is one package that can help.

* For reading data
* Especially for group-by and joins.

Drawbacks:

* Complex
* The syntax can be awkward
* It doesn't work like a data.frame, which can be confusing
* Piping with brackets isn't really feasible, and the dot approach is awkward
* Does not have its own 'verse', though many packages use it

If speed and/or memory is (potentially) a concern, use data.table. For interactive exploration, use dplyr. Piping allows one to use both, so there is no need to choose. And on the horizon…

Faster dplyr Alternatives
-------------------------

So we have data.table as a starting point for faster data processing operations, but there are others. The dtplyr package implements the data.table back-end for dplyr, so that you can seamlessly use them together. The newer package tidyfast works directly with a data.table object, but uses dplyr-esque functions. The following shows times for counting unique arrival times in the nycflights13 flights data (336776 rows).

| package    | timing |
| ---        | ---    |
| dplyr      | 10.580 |
| dtplyr     | 4.575  |
| data.table | 3.519  |
| tidyfast   | 3.507  |

Median time in milliseconds to do a count of arr_time on nycflights13::flights.

Just for giggles I did the same in Python with a pandas DataFrame, and it was notably slower than all of these options (almost 10x slower than standard dplyr). A lot of folks that use Python think R is slow, but that is mostly because they don't know how to effectively program with R for data science.

#### Out of memory situations

For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, as is the case with default R approaches. This allows you to process data that may otherwise be too large or too time-consuming to work with. For example, it'd be a great option if you are starting out with extremely large data, but for which your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing the data into memory.

data.table Exercises
--------------------

### Exercise 0

Install and load the data.table package. Create the following data table.

```
mydt = data.table(
  expand.grid(x = 1:3, y = c('a', 'b', 'c')), 
  z = sample(1:20, 9)
)
```

### Exercise 1

Create a new object that contains only the 'a' group. Think back to how you use a logical to select rows.

### Exercise 2

Create a new object that is the sum of z grouped by x. You don't need to name the sum variable.
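As a footnote to the Faster dplyr Alternatives section above, here is a minimal sketch (not part of the original chapter) of what the dtplyr workflow looks like: wrap the data with lazy_dt, write ordinary dplyr verbs, and collect the result at the end. The count of `arr_time` mirrors the benchmark described above.

```
library(dplyr)
library(dtplyr)
library(nycflights13)

flights_dt <- lazy_dt(flights)   # a lazy data.table wrapper around the data

flights_dt %>% 
  count(arr_time) %>%   # ordinary dplyr verbs are recorded, not yet run
  as_tibble()           # collecting triggers the translated data.table call

# Printing the lazy object instead of collecting it shows the data.table
# code that dtplyr generates, which is a nice way to learn the syntax.
```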
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_table.html
data.table ========== Another package for data processing that has been useful to many is data.table. It works in a notably different way than dplyr. However, you’d use it for the same reasons, e.g. subset, grouping, update, ordered joins etc., but with key advantages in speed and memory efficiency. Like dplyr, the data objects are both data.frames and a package specific class. ``` library(data.table) dt = data.table(x = sample(1:10, 6), g = 1:3, y = runif(6)) class(dt) ``` ``` [1] "data.table" "data.frame" ``` data.table Basics ----------------- In general, data.table works with brackets as in base R data frames. However, in order to use data.table effectively you’ll need to forget the data frame similarity. The brackets actually work like a function call, with several key arguments. Consider the following notation to start. ``` x[i, j, by, keyby, with = TRUE, ...] ``` Importantly: *you don’t use the brackets as you would with data.frames*. What **i** and **j** can be are fairly complex. In general, you use **i** for filtering by rows. ``` dt[2] # rows! not columns as with standard data.frame dt[2,] ``` ``` x g y 1: 5 2 0.1079452 x g y 1: 5 2 0.1079452 ``` You use **j** to select (by name!) or create new columns. We can define a new column with the :\= operator. ``` dt[,x] dt[,z := x+y] # dt now has a new column dt[,z] dt[g > 1, mean(z), by = g] dt ``` ``` [1] 6 5 2 9 8 1 [1] 6.908980 5.107945 2.843715 9.780681 8.215221 1.334649 g V1 1: 2 6.661583 2: 3 2.089182 x g y z 1: 6 1 0.9089802 6.908980 2: 5 2 0.1079452 5.107945 3: 2 3 0.8437154 2.843715 4: 9 1 0.7806815 9.780681 5: 8 2 0.2152209 8.215221 6: 1 3 0.3346486 1.334649 ``` Because **j** is an argument, dropping columns is awkward. ``` dt[, -y] # creates negative values of y dt[, -'y', with = F] # drops y, but now needs quotes ## dt[, y := NULL] # drops y, but this is just a base R approach ## dt$y = NULL ``` ``` [1] -0.9089802 -0.1079452 -0.8437154 -0.7806815 -0.2152209 -0.3346486 x g z 1: 6 1 6.908980 2: 5 2 5.107945 3: 2 3 2.843715 4: 9 1 9.780681 5: 8 2 8.215221 6: 1 3 1.334649 ``` Data table does not make unnecessary copies. For example if we do the following… ``` DT = data.table(A = 5:1, B = letters[5:1]) DT2 = DT DT3 = copy(DT) ``` DT2 and DT are just names for the same table. You’d actually need to use the copy function to make an explicit copy, otherwise whatever you do to DT2 will be done to DT. ``` DT2[,q:=1] DT ``` ``` A B q 1: 5 e 1 2: 4 d 1 3: 3 c 1 4: 2 b 1 5: 1 a 1 ``` ``` DT3 ``` ``` A B 1: 5 e 2: 4 d 3: 3 c 4: 2 b 5: 1 a ``` Grouped Operations ------------------ We can now attempt a ‘group\-by’ operation, along with creation of a new variable. Note that these operations actually modify the dt object *in place*, a key distinction with dplyr. Fewer copies means less of a memory hit. ``` dt1 = dt2 = dt dt[, sum(x, y), by = g] # sum of all x and y values ``` ``` g V1 1: 1 16.689662 2: 2 13.323166 3: 3 4.178364 ``` ``` dt1[, mysum := sum(x), by = g] # add new variable to the original data dt1 ``` ``` x g y z mysum 1: 6 1 0.9089802 6.908980 15 2: 5 2 0.1079452 5.107945 13 3: 2 3 0.8437154 2.843715 3 4: 9 1 0.7806815 9.780681 15 5: 8 2 0.2152209 8.215221 13 6: 1 3 0.3346486 1.334649 3 ``` We can also create groupings on the fly. For a new summary data set, we’ll take the following approach\- we create a grouping based on whether `g` is a value of one or not, then get the mean and sum of `x` for those two categories. The corresponding dplyr approach is also shown (but not evaluated) for comparison. 
``` dt2[, list(mean_x = mean(x), sum_x = sum(x)), by = g == 1] ``` ``` g mean_x sum_x 1: TRUE 7.5 15 2: FALSE 4.0 16 ``` ``` ## dt2 %>% ## group_by(g == 1) %>% ## summarise(mean_x = mean(x), sum_x = sum(x)) ``` Faster! ------- As mentioned, the reason to use data.table is speed. If you have large data or large operations it’ll be useful. ### Joins Joins can not only be faster but also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to the comparable dplyr’s left\_join function. ``` dt1 = setkey(dt1, x) dt1[dt2] dt1_df = dt2_df = as.data.frame(dt1) left_join(dt1_df, dt2_df, by = 'x') ``` | func | mean (microseconds) | | --- | --- | | dt\_join | 504\.77 | | dplyr\_join | 1588\.46 | ### Group by We can use the setkey function to order a data set by a certain column(s). This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timing regards creating a new variable ``` test_dt0 = data.table(x = rnorm(10000000), g = sample(letters, 10000000, replace = T)) test_dt1 = copy(test_dt0) test_dt2 = setkey(test_dt1, g) identical(test_dt0, test_dt1) ``` ``` [1] FALSE ``` ``` identical(test_dt1, test_dt2) ``` ``` [1] TRUE ``` ``` test_dt0 = test_dt0[, mean := mean(x), by = g] test_dt1 = test_dt1[, mean := mean(x), by = g] test_dt2 = test_dt2[, mean := mean(x), by = g] ``` | func | mean (milliseconds) | | --- | --- | | test\_dt0 | 381\.29 | | test\_dt1 | 118\.52 | | test\_dt2 | 109\.97 | ### String matching The chin function returns a vector of the *positions* of (first) matches of its first argument in its second, where both arguments are character vectors. Essentially it’s just like the %in% function for character vectors. Consider the following. We sample the first 14 letters 1000 times with replacement and see which ones match in a subset of another subset of letters. I compare the same operation to stringr and the stringi package whose functionality stringr using. They are both far slower than chin. ``` lets_1 = sample(letters[1:14], 1000, replace=T) lets_1 %chin% letters[13:26] %>% head(10) ``` ``` [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ``` ``` # stri_detect_regex(lets_1, paste(letters[13:26], collapse='|')) ``` ### Reading files If you use data.table for nothing else, you’d still want to consider it strongly for reading in large text files. The function fread may be quite useful in being memory efficient too. I compare it to readr. ``` fread('data/cars.csv') ``` | func | mean (microseconds) | | --- | --- | | dt | 430\.91 | | readr | 2900\.19 | ### More speed The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced it on my own machine based on 50 million observations. The grouped operations that are applied are just a sum and length on a vector. By the way, never, ever use aggregate. For anything. | fun | elapsed | | --- | --- | | aggregate | 56\.857 | | by | 18\.118 | | dplyr | 14\.447 | | sapply | 12\.200 | | lapply | 11\.309 | | tapply | 10\.570 | | data.table | 0\.866 | Ever. Really. Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. 
In practice, it likely won’t be notably faster except in some cases, like with aggregate. Pipe with data.table -------------------- Piping can be done with data.table objects too, using the brackets, but it’s awkward at best. ``` mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group][, -'y', with = FALSE] mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group ][,-'y', with=FALSE] ``` Probably better to just use a standard pipe and dot approach if you really need it. ``` mydt[, newvar := mean(x), ] %>% .[, newvar2 := sum(newvar), by = group] %>% .[, -'y', with = FALSE] ``` data.table Summary ------------------ Faster and more memory\-efficient methods are great to have. If you have large data this is one package that can help. * For reading data * Especially for group\-by and joins. Drawbacks: * Complex * The syntax can be awkward * It doesn’t work like a data.frame, which can be confusing * Piping with brackets isn’t really feasible, and the dot approach is awkward * Does not have its own ‘verse’, though many packages use it If speed and/or memory is (potentially) a concern, data.table. For interactive exploration, dplyr. Piping allows one to use both, so no need to choose. And on the horizon… Faster dplyr Alternatives ------------------------- So we have data.table as a starting point for faster data processing operations, but there are others. The dtplyr package implements the data.table back\-end for dplyr, so that you can seamlessly use them together. The newer package tidyfast works directly with a data.table object, but uses dplyr\-esque functions. The following shows times for a counting unique arrival times in the nycflights13 flights data (336776 rows). | package | timing | | --- | --- | | dplyr | 10\.580 | | dtplyr | 4\.575 | | data.table | 3\.519 | | tidyfast | 3\.507 | | a Median time in milliseconds to do a count of arr\_time on nycflights::flights | | --- | Just for giggles I did the same in Python with a pandas DataFrame, and it was notably slower than all of these options (almost 10x slower than standard dplyr). A lot of folks that use Python think R is slow, but that is mostly because they don’t know how to effectively program with R for data science. #### Out of memory situations For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, as is the case with default R approaches. This allows you to process data that may be too large or time consuming to do so otherwise. For example, it’d be a great option if you are starting out with extremely large data, but for which your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing it into memory. data.table Exercises -------------------- ### Exercise 0 Install and load the data.table package. Create the following data table. ``` mydt = data.table( expand.grid(x = 1:3, y = c('a', 'b', 'c')), z = sample(1:20, 9) ) ``` ### Exercise 1 Create a new object that contains only the ‘a’ group. Think back to how you use a logical to select rows. ### Exercise 2 Create a new object that is the sum of z grouped by x. You don’t need to name the sum variable. data.table Basics ----------------- In general, data.table works with brackets as in base R data frames. 
However, in order to use data.table effectively you’ll need to forget the data frame similarity. The brackets actually work like a function call, with several key arguments. Consider the following notation to start. ``` x[i, j, by, keyby, with = TRUE, ...] ``` Importantly: *you don’t use the brackets as you would with data.frames*. What **i** and **j** can be are fairly complex. In general, you use **i** for filtering by rows. ``` dt[2] # rows! not columns as with standard data.frame dt[2,] ``` ``` x g y 1: 5 2 0.1079452 x g y 1: 5 2 0.1079452 ``` You use **j** to select (by name!) or create new columns. We can define a new column with the :\= operator. ``` dt[,x] dt[,z := x+y] # dt now has a new column dt[,z] dt[g > 1, mean(z), by = g] dt ``` ``` [1] 6 5 2 9 8 1 [1] 6.908980 5.107945 2.843715 9.780681 8.215221 1.334649 g V1 1: 2 6.661583 2: 3 2.089182 x g y z 1: 6 1 0.9089802 6.908980 2: 5 2 0.1079452 5.107945 3: 2 3 0.8437154 2.843715 4: 9 1 0.7806815 9.780681 5: 8 2 0.2152209 8.215221 6: 1 3 0.3346486 1.334649 ``` Because **j** is an argument, dropping columns is awkward. ``` dt[, -y] # creates negative values of y dt[, -'y', with = F] # drops y, but now needs quotes ## dt[, y := NULL] # drops y, but this is just a base R approach ## dt$y = NULL ``` ``` [1] -0.9089802 -0.1079452 -0.8437154 -0.7806815 -0.2152209 -0.3346486 x g z 1: 6 1 6.908980 2: 5 2 5.107945 3: 2 3 2.843715 4: 9 1 9.780681 5: 8 2 8.215221 6: 1 3 1.334649 ``` Data table does not make unnecessary copies. For example if we do the following… ``` DT = data.table(A = 5:1, B = letters[5:1]) DT2 = DT DT3 = copy(DT) ``` DT2 and DT are just names for the same table. You’d actually need to use the copy function to make an explicit copy, otherwise whatever you do to DT2 will be done to DT. ``` DT2[,q:=1] DT ``` ``` A B q 1: 5 e 1 2: 4 d 1 3: 3 c 1 4: 2 b 1 5: 1 a 1 ``` ``` DT3 ``` ``` A B 1: 5 e 2: 4 d 3: 3 c 4: 2 b 5: 1 a ``` Grouped Operations ------------------ We can now attempt a ‘group\-by’ operation, along with creation of a new variable. Note that these operations actually modify the dt object *in place*, a key distinction with dplyr. Fewer copies means less of a memory hit. ``` dt1 = dt2 = dt dt[, sum(x, y), by = g] # sum of all x and y values ``` ``` g V1 1: 1 16.689662 2: 2 13.323166 3: 3 4.178364 ``` ``` dt1[, mysum := sum(x), by = g] # add new variable to the original data dt1 ``` ``` x g y z mysum 1: 6 1 0.9089802 6.908980 15 2: 5 2 0.1079452 5.107945 13 3: 2 3 0.8437154 2.843715 3 4: 9 1 0.7806815 9.780681 15 5: 8 2 0.2152209 8.215221 13 6: 1 3 0.3346486 1.334649 3 ``` We can also create groupings on the fly. For a new summary data set, we’ll take the following approach\- we create a grouping based on whether `g` is a value of one or not, then get the mean and sum of `x` for those two categories. The corresponding dplyr approach is also shown (but not evaluated) for comparison. ``` dt2[, list(mean_x = mean(x), sum_x = sum(x)), by = g == 1] ``` ``` g mean_x sum_x 1: TRUE 7.5 15 2: FALSE 4.0 16 ``` ``` ## dt2 %>% ## group_by(g == 1) %>% ## summarise(mean_x = mean(x), sum_x = sum(x)) ``` Faster! ------- As mentioned, the reason to use data.table is speed. If you have large data or large operations it’ll be useful. ### Joins Joins can not only be faster but also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to the comparable dplyr’s left\_join function. 
``` dt1 = setkey(dt1, x) dt1[dt2] dt1_df = dt2_df = as.data.frame(dt1) left_join(dt1_df, dt2_df, by = 'x') ``` | func | mean (microseconds) | | --- | --- | | dt\_join | 504\.77 | | dplyr\_join | 1588\.46 | ### Group by We can use the setkey function to order a data set by a certain column(s). This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timing regards creating a new variable ``` test_dt0 = data.table(x = rnorm(10000000), g = sample(letters, 10000000, replace = T)) test_dt1 = copy(test_dt0) test_dt2 = setkey(test_dt1, g) identical(test_dt0, test_dt1) ``` ``` [1] FALSE ``` ``` identical(test_dt1, test_dt2) ``` ``` [1] TRUE ``` ``` test_dt0 = test_dt0[, mean := mean(x), by = g] test_dt1 = test_dt1[, mean := mean(x), by = g] test_dt2 = test_dt2[, mean := mean(x), by = g] ``` | func | mean (milliseconds) | | --- | --- | | test\_dt0 | 381\.29 | | test\_dt1 | 118\.52 | | test\_dt2 | 109\.97 | ### String matching The chin function returns a vector of the *positions* of (first) matches of its first argument in its second, where both arguments are character vectors. Essentially it’s just like the %in% function for character vectors. Consider the following. We sample the first 14 letters 1000 times with replacement and see which ones match in a subset of another subset of letters. I compare the same operation to stringr and the stringi package whose functionality stringr using. They are both far slower than chin. ``` lets_1 = sample(letters[1:14], 1000, replace=T) lets_1 %chin% letters[13:26] %>% head(10) ``` ``` [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ``` ``` # stri_detect_regex(lets_1, paste(letters[13:26], collapse='|')) ``` ### Reading files If you use data.table for nothing else, you’d still want to consider it strongly for reading in large text files. The function fread may be quite useful in being memory efficient too. I compare it to readr. ``` fread('data/cars.csv') ``` | func | mean (microseconds) | | --- | --- | | dt | 430\.91 | | readr | 2900\.19 | ### More speed The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced it on my own machine based on 50 million observations. The grouped operations that are applied are just a sum and length on a vector. By the way, never, ever use aggregate. For anything. | fun | elapsed | | --- | --- | | aggregate | 56\.857 | | by | 18\.118 | | dplyr | 14\.447 | | sapply | 12\.200 | | lapply | 11\.309 | | tapply | 10\.570 | | data.table | 0\.866 | Ever. Really. Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. In practice, it likely won’t be notably faster except in some cases, like with aggregate. ### Joins Joins can not only be faster but also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to the comparable dplyr’s left\_join function. ``` dt1 = setkey(dt1, x) dt1[dt2] dt1_df = dt2_df = as.data.frame(dt1) left_join(dt1_df, dt2_df, by = 'x') ``` | func | mean (microseconds) | | --- | --- | | dt\_join | 504\.77 | | dplyr\_join | 1588\.46 | ### Group by We can use the setkey function to order a data set by a certain column(s). 
This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timing regards creating a new variable ``` test_dt0 = data.table(x = rnorm(10000000), g = sample(letters, 10000000, replace = T)) test_dt1 = copy(test_dt0) test_dt2 = setkey(test_dt1, g) identical(test_dt0, test_dt1) ``` ``` [1] FALSE ``` ``` identical(test_dt1, test_dt2) ``` ``` [1] TRUE ``` ``` test_dt0 = test_dt0[, mean := mean(x), by = g] test_dt1 = test_dt1[, mean := mean(x), by = g] test_dt2 = test_dt2[, mean := mean(x), by = g] ``` | func | mean (milliseconds) | | --- | --- | | test\_dt0 | 381\.29 | | test\_dt1 | 118\.52 | | test\_dt2 | 109\.97 | ### String matching The chin function returns a vector of the *positions* of (first) matches of its first argument in its second, where both arguments are character vectors. Essentially it’s just like the %in% function for character vectors. Consider the following. We sample the first 14 letters 1000 times with replacement and see which ones match in a subset of another subset of letters. I compare the same operation to stringr and the stringi package whose functionality stringr using. They are both far slower than chin. ``` lets_1 = sample(letters[1:14], 1000, replace=T) lets_1 %chin% letters[13:26] %>% head(10) ``` ``` [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ``` ``` # stri_detect_regex(lets_1, paste(letters[13:26], collapse='|')) ``` ### Reading files If you use data.table for nothing else, you’d still want to consider it strongly for reading in large text files. The function fread may be quite useful in being memory efficient too. I compare it to readr. ``` fread('data/cars.csv') ``` | func | mean (microseconds) | | --- | --- | | dt | 430\.91 | | readr | 2900\.19 | ### More speed The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced it on my own machine based on 50 million observations. The grouped operations that are applied are just a sum and length on a vector. By the way, never, ever use aggregate. For anything. | fun | elapsed | | --- | --- | | aggregate | 56\.857 | | by | 18\.118 | | dplyr | 14\.447 | | sapply | 12\.200 | | lapply | 11\.309 | | tapply | 10\.570 | | data.table | 0\.866 | Ever. Really. Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. In practice, it likely won’t be notably faster except in some cases, like with aggregate. Pipe with data.table -------------------- Piping can be done with data.table objects too, using the brackets, but it’s awkward at best. ``` mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group][, -'y', with = FALSE] mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group ][,-'y', with=FALSE] ``` Probably better to just use a standard pipe and dot approach if you really need it. ``` mydt[, newvar := mean(x), ] %>% .[, newvar2 := sum(newvar), by = group] %>% .[, -'y', with = FALSE] ``` data.table Summary ------------------ Faster and more memory\-efficient methods are great to have. If you have large data this is one package that can help. * For reading data * Especially for group\-by and joins. 
Drawbacks: * Complex * The syntax can be awkward * It doesn’t work like a data.frame, which can be confusing * Piping with brackets isn’t really feasible, and the dot approach is awkward * Does not have its own ‘verse’, though many packages use it If speed and/or memory is (potentially) a concern, data.table. For interactive exploration, dplyr. Piping allows one to use both, so no need to choose. And on the horizon… Faster dplyr Alternatives ------------------------- So we have data.table as a starting point for faster data processing operations, but there are others. The dtplyr package implements the data.table back\-end for dplyr, so that you can seamlessly use them together. The newer package tidyfast works directly with a data.table object, but uses dplyr\-esque functions. The following shows times for a counting unique arrival times in the nycflights13 flights data (336776 rows). | package | timing | | --- | --- | | dplyr | 10\.580 | | dtplyr | 4\.575 | | data.table | 3\.519 | | tidyfast | 3\.507 | | a Median time in milliseconds to do a count of arr\_time on nycflights::flights | | --- | Just for giggles I did the same in Python with a pandas DataFrame, and it was notably slower than all of these options (almost 10x slower than standard dplyr). A lot of folks that use Python think R is slow, but that is mostly because they don’t know how to effectively program with R for data science. #### Out of memory situations For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, as is the case with default R approaches. This allows you to process data that may be too large or time consuming to do so otherwise. For example, it’d be a great option if you are starting out with extremely large data, but for which your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing it into memory. #### Out of memory situations For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, as is the case with default R approaches. This allows you to process data that may be too large or time consuming to do so otherwise. For example, it’d be a great option if you are starting out with extremely large data, but for which your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing it into memory. data.table Exercises -------------------- ### Exercise 0 Install and load the data.table package. Create the following data table. ``` mydt = data.table( expand.grid(x = 1:3, y = c('a', 'b', 'c')), z = sample(1:20, 9) ) ``` ### Exercise 1 Create a new object that contains only the ‘a’ group. Think back to how you use a logical to select rows. ### Exercise 2 Create a new object that is the sum of z grouped by x. You don’t need to name the sum variable. ### Exercise 0 Install and load the data.table package. Create the following data table. ``` mydt = data.table( expand.grid(x = 1:3, y = c('a', 'b', 'c')), z = sample(1:20, 9) ) ``` ### Exercise 1 Create a new object that contains only the ‘a’ group. Think back to how you use a logical to select rows. 
### Exercise 2 Create a new object that is the sum of z grouped by x. You don’t need to name the sum variable.
Data Visualization
m-clark.github.io
https://m-clark.github.io/data-processing-and-visualization/data_table.html
data.table ========== Another package for data processing that has been useful to many is data.table. It works in a notably different way than dplyr. However, you’d use it for the same reasons, e.g. subset, grouping, update, ordered joins etc., but with key advantages in speed and memory efficiency. Like dplyr, the data objects are both data.frames and a package specific class. ``` library(data.table) dt = data.table(x = sample(1:10, 6), g = 1:3, y = runif(6)) class(dt) ``` ``` [1] "data.table" "data.frame" ``` data.table Basics ----------------- In general, data.table works with brackets as in base R data frames. However, in order to use data.table effectively you’ll need to forget the data frame similarity. The brackets actually work like a function call, with several key arguments. Consider the following notation to start. ``` x[i, j, by, keyby, with = TRUE, ...] ``` Importantly: *you don’t use the brackets as you would with data.frames*. What **i** and **j** can be are fairly complex. In general, you use **i** for filtering by rows. ``` dt[2] # rows! not columns as with standard data.frame dt[2,] ``` ``` x g y 1: 5 2 0.1079452 x g y 1: 5 2 0.1079452 ``` You use **j** to select (by name!) or create new columns. We can define a new column with the :\= operator. ``` dt[,x] dt[,z := x+y] # dt now has a new column dt[,z] dt[g > 1, mean(z), by = g] dt ``` ``` [1] 6 5 2 9 8 1 [1] 6.908980 5.107945 2.843715 9.780681 8.215221 1.334649 g V1 1: 2 6.661583 2: 3 2.089182 x g y z 1: 6 1 0.9089802 6.908980 2: 5 2 0.1079452 5.107945 3: 2 3 0.8437154 2.843715 4: 9 1 0.7806815 9.780681 5: 8 2 0.2152209 8.215221 6: 1 3 0.3346486 1.334649 ``` Because **j** is an argument, dropping columns is awkward. ``` dt[, -y] # creates negative values of y dt[, -'y', with = F] # drops y, but now needs quotes ## dt[, y := NULL] # drops y, but this is just a base R approach ## dt$y = NULL ``` ``` [1] -0.9089802 -0.1079452 -0.8437154 -0.7806815 -0.2152209 -0.3346486 x g z 1: 6 1 6.908980 2: 5 2 5.107945 3: 2 3 2.843715 4: 9 1 9.780681 5: 8 2 8.215221 6: 1 3 1.334649 ``` Data table does not make unnecessary copies. For example if we do the following… ``` DT = data.table(A = 5:1, B = letters[5:1]) DT2 = DT DT3 = copy(DT) ``` DT2 and DT are just names for the same table. You’d actually need to use the copy function to make an explicit copy, otherwise whatever you do to DT2 will be done to DT. ``` DT2[,q:=1] DT ``` ``` A B q 1: 5 e 1 2: 4 d 1 3: 3 c 1 4: 2 b 1 5: 1 a 1 ``` ``` DT3 ``` ``` A B 1: 5 e 2: 4 d 3: 3 c 4: 2 b 5: 1 a ``` Grouped Operations ------------------ We can now attempt a ‘group\-by’ operation, along with creation of a new variable. Note that these operations actually modify the dt object *in place*, a key distinction with dplyr. Fewer copies means less of a memory hit. ``` dt1 = dt2 = dt dt[, sum(x, y), by = g] # sum of all x and y values ``` ``` g V1 1: 1 16.689662 2: 2 13.323166 3: 3 4.178364 ``` ``` dt1[, mysum := sum(x), by = g] # add new variable to the original data dt1 ``` ``` x g y z mysum 1: 6 1 0.9089802 6.908980 15 2: 5 2 0.1079452 5.107945 13 3: 2 3 0.8437154 2.843715 3 4: 9 1 0.7806815 9.780681 15 5: 8 2 0.2152209 8.215221 13 6: 1 3 0.3346486 1.334649 3 ``` We can also create groupings on the fly. For a new summary data set, we’ll take the following approach\- we create a grouping based on whether `g` is a value of one or not, then get the mean and sum of `x` for those two categories. The corresponding dplyr approach is also shown (but not evaluated) for comparison. 
``` dt2[, list(mean_x = mean(x), sum_x = sum(x)), by = g == 1] ``` ``` g mean_x sum_x 1: TRUE 7.5 15 2: FALSE 4.0 16 ``` ``` ## dt2 %>% ## group_by(g == 1) %>% ## summarise(mean_x = mean(x), sum_x = sum(x)) ``` Faster! ------- As mentioned, the reason to use data.table is speed. If you have large data or large operations it’ll be useful. ### Joins Joins can not only be faster but also easy to do. Note that the `i` argument can be a data.table object itself. I compare its speed to the comparable dplyr’s left\_join function. ``` dt1 = setkey(dt1, x) dt1[dt2] dt1_df = dt2_df = as.data.frame(dt1) left_join(dt1_df, dt2_df, by = 'x') ``` | func | mean (microseconds) | | --- | --- | | dt\_join | 504\.77 | | dplyr\_join | 1588\.46 | ### Group by We can use the setkey function to order a data set by a certain column(s). This ordering is done by reference; again, no copy is made. Doing this will allow for faster grouped operations, though you likely will only see the speed gain with very large data. The timing regards creating a new variable ``` test_dt0 = data.table(x = rnorm(10000000), g = sample(letters, 10000000, replace = T)) test_dt1 = copy(test_dt0) test_dt2 = setkey(test_dt1, g) identical(test_dt0, test_dt1) ``` ``` [1] FALSE ``` ``` identical(test_dt1, test_dt2) ``` ``` [1] TRUE ``` ``` test_dt0 = test_dt0[, mean := mean(x), by = g] test_dt1 = test_dt1[, mean := mean(x), by = g] test_dt2 = test_dt2[, mean := mean(x), by = g] ``` | func | mean (milliseconds) | | --- | --- | | test\_dt0 | 381\.29 | | test\_dt1 | 118\.52 | | test\_dt2 | 109\.97 | ### String matching The chin function returns a vector of the *positions* of (first) matches of its first argument in its second, where both arguments are character vectors. Essentially it’s just like the %in% function for character vectors. Consider the following. We sample the first 14 letters 1000 times with replacement and see which ones match in a subset of another subset of letters. I compare the same operation to stringr and the stringi package whose functionality stringr using. They are both far slower than chin. ``` lets_1 = sample(letters[1:14], 1000, replace=T) lets_1 %chin% letters[13:26] %>% head(10) ``` ``` [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ``` ``` # stri_detect_regex(lets_1, paste(letters[13:26], collapse='|')) ``` ### Reading files If you use data.table for nothing else, you’d still want to consider it strongly for reading in large text files. The function fread may be quite useful in being memory efficient too. I compare it to readr. ``` fread('data/cars.csv') ``` | func | mean (microseconds) | | --- | --- | | dt | 430\.91 | | readr | 2900\.19 | ### More speed The following demonstrates some timings from [here](http://stackoverflow.com/questions/3505701/r-grouping-functions-sapply-vs-lapply-vs-apply-vs-tapply-vs-by-vs-aggrega/34167477#34167477). I reproduced it on my own machine based on 50 million observations. The grouped operations that are applied are just a sum and length on a vector. By the way, never, ever use aggregate. For anything. | fun | elapsed | | --- | --- | | aggregate | 56\.857 | | by | 18\.118 | | dplyr | 14\.447 | | sapply | 12\.200 | | lapply | 11\.309 | | tapply | 10\.570 | | data.table | 0\.866 | Ever. Really. Another thing to note is that the tidy approach is more about clarity and code efficiency relative to base R, as well as doing important background data checks and returning more usable results. 
Pipe with data.table
--------------------

Piping can be done with data.table objects too, using the brackets, but it's awkward at best (`mydt` and `group` here are placeholders; the code is shown for illustration rather than evaluated).

```
mydt[, newvar := mean(x), ][, newvar2 := sum(newvar), by = group][, -'y', with = FALSE]

mydt[, newvar := mean(x), 
     ][, newvar2 := sum(newvar), by = group
     ][, -'y', with = FALSE]
```

It's probably better to just use a standard pipe and the dot approach if you really need it.

```
mydt[, newvar := mean(x), ] %>% 
  .[, newvar2 := sum(newvar), by = group] %>% 
  .[, -'y', with = FALSE]
```

data.table Summary
------------------

Faster and more memory\-efficient methods are great to have. If you have large data, this is one package that can help, especially:

* For reading data
* For group\-by operations and joins

Drawbacks:

* Complex
* The syntax can be awkward
* It doesn't work like a data.frame, which can be confusing
* Piping with brackets isn't really feasible, and the dot approach is awkward
* Does not have its own 'verse', though many packages use it

If speed and/or memory is (potentially) a concern, use data.table. For interactive exploration, use dplyr. Piping allows one to use both, so there's no need to choose. And there's more on the horizon…

Faster dplyr Alternatives
-------------------------

So we have data.table as a starting point for faster data processing operations, but there are others. The dtplyr package implements a data.table back\-end for dplyr, so that you can seamlessly use them together (a brief sketch appears just before the exercises below). The newer tidyfast package works directly with a data.table object, but uses dplyr\-esque functions. The following shows median times, in milliseconds, for counting unique values of arr\_time in the nycflights13 flights data (336776 rows).

| package | timing |
| --- | --- |
| dplyr | 10\.580 |
| dtplyr | 4\.575 |
| data.table | 3\.519 |
| tidyfast | 3\.507 |

Just for giggles I did the same in Python with a pandas DataFrame, and it was notably slower than all of these options (almost 10x slower than standard dplyr). A lot of folks who use Python think R is slow, but that is mostly because they don't know how to program effectively with R for data science.

#### Out of memory situations

For very large data sets, especially in cases where distributed data solutions like Spark (and sparklyr) are not viable for practical or security reasons, you may need to try another approach. The disk.frame package does data processing on disk rather than in memory, as is the case with default R approaches. This allows you to process data that may otherwise be too large or too time\-consuming to work with. For example, it'd be a great option if you are starting out with extremely large data, but your subset of interest is easily manageable within R. With disk.frame, you can do the initial filtering and selection before bringing the data into memory.
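As a rough illustration of that disk.frame workflow, here is a minimal sketch. The file name and column names are hypothetical, and it assumes disk.frame's csv\_to\_disk.frame function and its dplyr\-verb support; check the package documentation for the details of your version.

```
library(disk.frame)
library(dplyr)

## One-time conversion of a very large csv to an on-disk, chunked format
## ('big_file.csv', 'year', 'id', and 'value' are made-up names)
big_df = csv_to_disk.frame('data/big_file.csv', outdir = 'data/big_file.df')

## Filter and select on disk, then collect only the small subset into memory
my_subset = big_df %>% 
  filter(year == 2020) %>% 
  select(id, year, value) %>% 
  collect()
```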
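Before the exercises, here is the brief dtplyr sketch promised above. It is only a sketch of the general idea (assuming the dtplyr, dplyr, and nycflights13 packages, and mirroring the count of arr\_time used in the earlier benchmark): lazy\_dt wraps a data frame so that subsequent dplyr verbs are translated into data.table code, which only runs when you ask for the result.

```
library(dtplyr)
library(dplyr)
library(nycflights13)

## Wrap the data frame; dplyr verbs are now translated to data.table operations
flights_dt = lazy_dt(flights)

## Nothing is computed until we ask for the result
flights_dt %>% 
  count(arr_time) %>% 
  as_tibble()
```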
data.table Exercises
--------------------

### Exercise 0

Install and load the data.table package. Create the following data table.

```
mydt = data.table(
  expand.grid(x = 1:3, y = c('a', 'b', 'c')), 
  z = sample(1:20, 9)
)
```

### Exercise 1

Create a new object that contains only the 'a' group. Think back to how you use a logical to select rows.

### Exercise 2

Create a new object that is the sum of z grouped by x. You don't need to name the sum variable.
Text Analysis