Statistical Analysis with Measurement Error or Misclassification

Written by Grace Y. Yi, published by Springer Science+Business Media LLC, 2017.

This is a treasure of a book to go with a coding book. It gives the what, why, and how of missing data, measurement error, and misclassification.

Chapter 2 covers measurement error: incorrect readings of a precise measurement, for example reading a three as an eight.

Systematic error, sampling error, statistical bias: each type of error has its own way of being handled, and often the data contain more than one type of error.

Naive estimators incur larger bias than estimators obtained from valid methods, but the latter entail more variation than the naive estimators.

Lots to think about; Chapter 9 asks a lot of good questions.

Use the most plausible method to handle missing, misclassified, and error-prone data. The methods are well covered in the book.

This is a stats book; the key to the symbols is at the beginning of the book.

It is known that ignoring measurement error can cause misleading results.
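As a quick illustration of that point, here is a minimal simulation sketch in R (mine, not the book's) showing how classical additive measurement error in a covariate attenuates a naive regression slope; the names and numbers are made up.

set.seed(1)
n <- 1000
x <- rnorm(n)                 # true covariate
y <- 2 * x + rnorm(n)         # true slope is 2
w <- x + rnorm(n)             # observed covariate, measured with error

coef(lm(y ~ x))["x"]          # close to 2
coef(lm(y ~ w))["w"]          # naive estimate, attenuated toward zero (about 1)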

Hidden Inequalities in the Workplace

Published by Springer International Publishing, 2018
Editors: Valerie Caven and Stefanos Nachmias

I have been commissioned by AONW to do a study on ageism in the workplace. I am glad that I found this book while doing research. It is a timely book on difficult topics.

They make a business case for diversity: the real benefit assigned to diversity management is gaining competitive advantage and enhancing performance through human capital.

The Quality of Work Among Older Workers
Chapter 5, page 91
written by Christopher Lawton and Daniel Wheatley

This chapter sheds light on this underexplored area of the labor market. It concludes that working into later life can bring benefits to society, including higher national output, lower unemployment, lower welfare costs, and reduced health spending.

Cognitive Biases in Recruitment, Selection and Promotion: The Risk of Subconscious Discrimination
written by Zara Whysall

This chapter states that despite the documented benefits of workplace diversity, progress in achieving it has been slow.

This book has given me a lot to think about and a lot more to explore.

Tables with R

People around a Thanksgiving table enjoying dinner, 11/26/2009

The Cirque du Soleil show Kurios has an act where the performers mirror a table. It is amazing to see people upside down mirroring a table.

The R programming language has several packages for making tables. Base R has a function called table(), which is good enough; sometimes you want more. At a meeting last night someone said pander was the best package. Someone else said they liked htmlTable better. There are also xtable and tables; tables was written to be like SAS PROC TABULATE. Many choices: pick the one whose directions you understand and that meets your publishing needs (a short sketch of two of them follows the list below). Better depends on your point of view.

table

tables

xtable

htmlTable

pander
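As a minimal sketch, here are two of these in action, base table() and xtable; mtcars is just a built-in example data set, not anything from the meeting.

# Base R: a simple cross-tabulation of cylinders by gears
counts <- table(mtcars$cyl, mtcars$gear)
counts

# xtable: convert the same table to LaTeX (or HTML) for publishing
library(xtable)               # install.packages("xtable") if needed
print(xtable(counts), type = "latex")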

Thanksgiving table, right side up

Functions in R

Hug Point, Oregon Coast

Wish I was at the coast.

R does a lot with functions. Let’s start with a simple function statement. Base R creates functions with the keyword function.

In the following code:

f is the object’s name

x is the variable (the function’s argument)

the function body, x + 1, goes between the { }

Pretty simple

f <- function(x) {x + 1}

Take
f(4)

and the result is

[1] 5

Write your own code and try other functions. It is easier to write a function in R than in many other languages.
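For instance, here is a slightly bigger sketch, assuming you want an argument with a default value (the function name add_n is made up).

# A function with two arguments; the second has a default value
add_n <- function(x, n = 1) {
  x + n
}

add_n(4)          # [1] 5, uses the default n = 1
add_n(4, n = 10)  # [1] 14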

Data Visualisation with R


written by Thomas Rahlf

published by Springer International Publishing 2017

Originally published as Datendesign mit R, 2014

www.datavisualisation-r.com

This is a well-written book for designers. Part one of the book, Basics and Techniques, covers more than the basics. Fig. 2.1 shows the elements of a figure; R has the commands to put all of these things on a graph.
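As a rough sketch of that idea (my own example, not the book's code), base R can add a title, axis labels, and a legend to one plot:

# Scatter plot with a title, axis labels, and a legend as separate elements
plot(mtcars$wt, mtcars$mpg,
     main = "Fuel economy by weight",   # title
     xlab = "Weight (1000 lbs)",        # x-axis label
     ylab = "Miles per gallon",         # y-axis label
     pch = 19)
legend("topright", legend = "Cars in mtcars", pch = 19)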

Typefaces, fonts, and symbols: again, more information than I usually see in an R book.

Part two is the examples; 100 of them are on the book’s website. The examples discuss good design, layout, and readability.

One of my favorites is figure 6.3.7, a tree map. A tree map is a good way to see proportions, such as how much each part of a budget takes up, and small but important items do not disappear.
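Here is a minimal tree map sketch using the treemap package (my own example with made-up budget numbers, not the book's code):

library(treemap)              # install.packages("treemap") if needed

budget <- data.frame(
  item   = c("Salaries", "Rent", "Travel", "Supplies"),
  amount = c(500, 120, 30, 10)
)

# Rectangle area is proportional to amount, so small items stay visible
treemap(budget, index = "item", vSize = "amount")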

Enjoy this book. I am having fun getting the code to work on other data.

Data the World’s Most Valuable Resource

I just read an article in the May 6th, 2017 issue of The Economist: the briefing on the data economy, “Fuel of the future.”

An interesting thought that data is the oil of this century.

“Data are to this century what oil was to the last one: a driver of growth and change.” page 19

This idea gives you a lot to think about.

One thing to think about is the lack of fungibility for data.

And who owns the data?

Lots to think about.

ggplot2

ggplot2: Elegant Graphics for Data Analysis (book cover)

The latest edition of Hadley Wickham’s book ggplot2

Springer International Publishing 2016

This is a major update. I spent a lot of time going over the last chapters in the book.

Part 3, Data Analysis, covers a different way of using ggplot2. Instead of doing the analysis and then plotting, do both parts at the same time using ggplot2 and other new, useful packages.

Chapter 9 covers tidy data. Tidy data has variables in columns and observations in rows. Straightforward, but the data doesn’t always come that way. The packages tidyr and dplyr help with tidying up data.
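A minimal sketch of what tidying looks like with tidyr's gather(), using made-up data: a wide table with one column per year becomes a long, tidy table with one row per observation.

library(tidyr)

# Untidy: one column per year
wide <- data.frame(country = c("A", "B"),
                   `2015` = c(10, 20),
                   `2016` = c(12, 25),
                   check.names = FALSE)

# Tidy: one row per country-year combination
long <- gather(wide, key = "year", value = "value", -country)
long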

One of the things covered in Chapter 10 is pipes and the package magrittr. Using pipes makes for cleaner code.
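A tiny sketch of the difference pipes make (my own example): the piped version reads left to right instead of inside out.

library(magrittr)

# Nested: read from the inside out
round(mean(sqrt(mtcars$mpg)), 2)

# Piped: read left to right
mtcars$mpg %>% sqrt() %>% mean() %>% round(2)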

Chapter 11, Modelling for Visualisation, introduces the new package called broom. The broom package takes the messy output of model functions such as lm, glm, and anova and makes it tidy.
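A minimal broom sketch (my own example, using the built-in mtcars data): tidy() turns the coefficient table of an lm fit into a tidy data frame.

library(broom)

fit <- lm(mpg ~ wt + hp, data = mtcars)
tidy(fit)     # one row per coefficient: estimate, std.error, statistic, p.value
glance(fit)   # one-row summary of the whole model (R squared, AIC, ...)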

The beginning of the book covers aes(), which you need for your plot, and the geoms, which you keep adding as layers.
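A small sketch of that idea, using the mpg data that ships with ggplot2: aes() maps the variables, and each geom adds a layer.

library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy)) +   # aes() maps data to x and y
  geom_point() +                         # layer 1: points
  geom_smooth(method = "loess")          # layer 2: a smoothed trend line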

This is a good book for learning how to use ggplot2 and new techniques for analyzing data.

Separating Data in R

I had some messy data to turn tidy: a column of data that needed to be separated into two columns. All the directions I found were obscure and not helpful; try searching for a regular expression on the web.
One of the things I was puzzled over was \\.+ until I found out it was a regular expression pattern used with functions like gsub(), which made it much easier to search on. The delimiter was another puzzling thing until I realized that I could treat it the same as when I read csv files. This is the R code that worked.

library(dplyr)
library(tidyr)

tidymessydata <- separate(messydata, State.ZIP, into = c("State", "Zip"), sep = " ")

separate() is a function from the tidyr package

messydata is the data frame, and State.ZIP is the column that should be split into two

into gives the names of the new columns

sep is the delimiter; a space is what the column was separated on. I pressed the space bar between the quotation marks.

Hopefully this is clearer than what I found for directions.
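To make the directions concrete, here is a self-contained sketch with made-up data (messydata above is the real data; this small data frame is only for illustration).

library(tidyr)

messy <- data.frame(State.ZIP = c("OR 97201", "WA 98101"))

separate(messy, State.ZIP, into = c("State", "Zip"), sep = " ")
#   State   Zip
# 1    OR 97201
# 2    WA 98101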


The Cox Model and Its Applications


The Cox Model and Its Applications, published in SpringerBriefs in Statistics, 2016. Written by Mikhail Nikulin and Hong-Dar Isaac Wu.

I enjoyed reading this book although it has no code examples. I think I can figure out the code from the precise equations.

The Cox proportional hazards model is a type of survival analysis. The proportional hazards model was put forward by Sir David Cox in 1972.
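The book itself has no code, but in R this model is usually fit with the survival package; a minimal sketch (my own, using the lung data that ships with survival):

library(survival)

# Cox proportional hazards model: hazard depends on age and sex;
# Surv() holds the (possibly censored) survival times
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)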

Chapter 2 covers the basic concepts for models, including classical parametric models and how to handle censored data.

Chapter 3 covers the Cox proportional hazards model, including the tampered failure time model.

Chapter 5 is about Cross-effect Models of Survival Functions.

Section 5.2 is Parametric Weibull Regression with Heteroscedastic Shape Parameter.

There are lots more models. I recommend reading the book with a card on which you have written, in a way you understand, the definitions and symbols used in the book.


RStudio & GitHub

Last night I learned what step I was missing to use RStudio and GitHub together. When I needed to push code to GitHub I couldn’t get it to work. This worked:

First make a repository on GitHub.

Then copy the SSH URL for cloning the repository.

Open RStudio and make a new project for the repository: go to the Tools tab, choose version control, and pick Git.

Next, set up the project’s version control.

Paste the SSH clone URL into the RStudio box for GitHub.

Then the rest happens: RStudio is linked to GitHub, and you can commit, push, and pull.

I am glad I finally figured this out. Going to user groups is beneficial.