Tulika Goyal

B.Tech second-year student of Polymer Science.

Student at IIT Roorkee

How to determine the value of K

How to determine the value of K: In K-means, every cluster has its own centroid. The sum of the squared differences between the centroid and the data points within a cluster is the within-cluster sum of squares for that cluster. When the within-cluster sums of squares for all clusters are added together, we get the total within-cluster sum of squares for the cluster solution. As the number of clusters increases, this value keeps decreasing, but if you plot it against k you will typically see that it drops sharply up to some value of k and then much more slowly after that. That "elbow" is where we can find the optimum number of clusters.
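As a rough sketch of this elbow procedure, assuming scikit-learn is available and X is a numeric feature matrix (the data below is purely illustrative, not from the original article):

#Import Library
from sklearn.cluster import KMeans
import numpy as np

# Illustrative data: replace with your own feature matrix X
X = np.random.rand(200, 2)

# Compute the total within-cluster sum of squares (inertia) for a range of k values
inertia = []
for k in range(1, 11):
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    kmeans.fit(X)
    inertia.append(kmeans.inertia_)

# Plotting k against these values shows the sharp-then-slow decrease described above;
# the "elbow" of the curve suggests a reasonable number of clusters
for k, value in zip(range(1, 11), inertia):
    print(k, value)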

How K-means forms clusters:

How K-means forms clusters: 1. K-means picks k points, one for each cluster, known as centroids. 2. Each data point is assigned to the closest centroid, forming k clusters. 3. The centroid of each cluster is recomputed from its current members, giving new centroids. 4. With the new centroids, repeat steps 2 and 3: find the closest centroid for each data point and re-assign the point to the corresponding cluster. 5. Repeat this process until convergence occurs, i.e. the centroids no longer change.
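The other algorithm sections below include code snippets, but the original text gives none for K-means, so here is a minimal sketch in the same style, assuming scikit-learn; the data and variable names are illustrative assumptions:

#Import Library
from sklearn.cluster import KMeans
import numpy as np

# Illustrative feature matrix; replace with your own data
X = np.random.rand(100, 3)

# Create a K-means object with k=3 clusters and fit it to the data
model = KMeans(n_clusters=3, n_init=10, random_state=0)
model.fit(X)

# Cluster assignment for each training point and the final centroids
labels = model.labels_
centroids = model.cluster_centers_

# Assign new, unseen points to the nearest centroid
x_test = np.random.rand(5, 3)
predicted = model.predict(x_test)
print(predicted)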

kNN (k-Nearest Neighbors)

kNN (k-Nearest Neighbors): It can be used for both classification and regression problems. However, it is more widely used for classification problems in industry. k-Nearest Neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of their k nearest neighbors. The case is assigned to the class most common amongst its k nearest neighbors, measured by a distance function. These distance functions can be Euclidean, Manhattan, Minkowski and Hamming distance. The first three are used for continuous variables and the fourth (Hamming) for categorical variables. If k = 1, the case is simply assigned to the class of its nearest neighbor. At times, choosing k turns out to be a challenge while performing kNN modelling.

kNN can easily be mapped to our real lives. If you want to learn about a person of whom you have no information, you might find out about their close friends and the circles they move in, and infer their information from that.

Things to consider before selecting kNN: kNN is computationally expensive. Variables should be normalized, otherwise variables with a larger range can bias the result. More work goes into the pre-processing stage (outlier and noise removal) before running kNN.

Python Code
#Import Library
from sklearn.neighbors import KNeighborsClassifier
#Assumed you have, X (predictor) and y (target) for training data set and x_test (predictor) of test_dataset
# Create KNeighbors classifier object model
model = KNeighborsClassifier(n_neighbors=6) # default value for n_neighbors is 5
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted = model.predict(x_test)

R Code
# knn() in the class package classifies test cases directly from the training data; there is no separate fitting step
library(class)
#Predict Output
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)
summary(predicted)
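Since the text notes that variables should be normalized before kNN, here is a minimal sketch of one common way to do that with scikit-learn; the Pipeline/StandardScaler approach and the data below are illustrations, not the method prescribed by the original article:

#Import Library
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data with two features on very different scales
X = np.column_stack([np.random.rand(100) * 1000, np.random.rand(100)])
y = np.random.randint(0, 2, size=100)
x_test = np.column_stack([np.random.rand(5) * 1000, np.random.rand(5)])

# Scale each feature to zero mean and unit variance before the distance computation,
# so the large-range feature does not dominate the neighbour search
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
predicted = model.predict(x_test)
print(predicted)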

Linear Regression

Linear Regression: It is used to estimate real values (cost of houses, number of calls, total sales, etc.) based on continuous variable(s). Here, we establish a relationship between the independent and dependent variables by fitting a best-fit line. This best-fit line is known as the regression line and is represented by the linear equation Y = a*X + b.

The best way to understand linear regression is to relive this experience of childhood. Let us say you ask a child in fifth grade to arrange the people in his class in increasing order of weight, without asking them their weights! What do you think the child will do? He or she would likely look at (visually analyse) the height and build of people and arrange them using a combination of these visible parameters. This is linear regression in real life! The child has actually figured out that height and build are correlated with weight by a relationship, which looks like the equation above. In this equation: Y – dependent variable, a – slope, X – independent variable, b – intercept. The coefficients a and b are derived by minimizing the sum of the squared vertical distances between the data points and the regression line.

Look at the example below. Here we have identified the best-fit line with the linear equation y = 0.2811x + 13.9. Using this equation, we can predict the weight of a person from their height.

Linear regression is mainly of two types: simple linear regression and multiple linear regression. Simple linear regression is characterized by one independent variable, and multiple linear regression (as the name suggests) is characterized by multiple (more than one) independent variables. While finding the best-fit line, you can also fit a polynomial or curvilinear relationship; this is known as polynomial or curvilinear regression.

Python Code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric numpy arrays
x_train = input_variables_values_training_datasets
y_train = target_variables_values_training_datasets
x_test = input_variables_values_test_datasets
# Create linear regression object
linear = linear_model.LinearRegression()
# Train the model using the training sets and check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
#Equation coefficient and Intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
#Predict Output
predicted = linear.predict(x_test)

R Code
#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)
# Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted <- predict(linear, x_test)

Naive Bayes

Naive Bayes: It is a classification technique based on Bayes' theorem, with an assumption of independence between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or on the existence of other features, a Naive Bayes classifier considers all of these properties to contribute independently to the probability that this fruit is an apple. A Naive Bayes model is easy to build and particularly useful for very large data sets. Along with its simplicity, Naive Bayes can outperform even highly sophisticated classification methods.

Bayes' theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c): P(c|x) = P(x|c) * P(c) / P(x). Here, P(c|x) is the posterior probability of the class (target) given the predictor (attribute), P(c) is the prior probability of the class, P(x|c) is the likelihood, i.e. the probability of the predictor given the class, and P(x) is the prior probability of the predictor.

Example: Let's understand it using an example. Below I have a training data set of weather and the corresponding target variable 'Play'. We need to classify whether players will play or not based on the weather conditions. Let's follow the steps below. Step 1: Convert the data set to a frequency table. Step 2: Create a likelihood table by finding the probabilities, e.g. the probability of Overcast is 0.29 and the probability of playing is 0.64. Step 3: Use the Naive Bayes equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of the prediction.

Problem: Players will play if the weather is sunny – is this statement correct? We can solve it using the method discussed above: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny). Here we have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36 and P(Yes) = 9/14 = 0.64. So P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability. Naive Bayes uses a similar method to predict the probability of each class based on the various attributes. This algorithm is mostly used in text classification and in problems with multiple classes.

Python Code
#Import Library
from sklearn.naive_bayes import GaussianNB
#Assumed you have, X (predictor) and y (target) for training data set and x_test (predictor) of test_dataset
# Create a Gaussian Naive Bayes classification object; there are other variants, such as Multinomial and Bernoulli Naive Bayes, for other feature distributions
model = GaussianNB()
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted = model.predict(x_test)

R Code
library(e1071)
x <- cbind(x_train, y_train)
# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

SVM (Support Vector Machine)

SVM (Support Vector Machine): It is a classification method. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. For example, if we only had two features, say height and hair length of an individual, we would first plot these two variables in two-dimensional space, where each point has two coordinates. The points lying closest to the separating boundary are known as support vectors. Now we find a line that splits the data between the two differently classified groups. This will be the line such that the distance from the closest point in each of the two groups is as large as possible. In the example shown above, the line which splits the data into two differently classified groups is the black line, since the two closest points are the farthest from the line. This line is our classifier. Then, depending on which side of the line a test point lands, that is the class we assign to the new data.

Think of this algorithm as playing JezzBall in n-dimensional space, with two tweaks to the game: you can draw lines/planes at any angle (rather than just horizontal or vertical as in the classic game), and the objective is to segregate balls of different colours into different rooms, with the balls not moving.

Python Code
#Import Library
from sklearn import svm
#Assumed you have, X (predictor) and y (target) for training data set and x_test (predictor) of test_dataset
# Create SVM classification object; there are various options associated with it, this is a simple one for classification
model = svm.SVC()
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted = model.predict(x_test)

R Code
library(e1071)
x <- cbind(x_train, y_train)
# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

Decision Tree

Decision Tree: This is one of my favorite algorithms and I use it quite frequently. It is a type of supervised learning algorithm that is mostly used for classification problems. Surprisingly, it works for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets. This is done based on the most significant attributes/independent variables, to make the resulting groups as distinct from each other as possible. For more details, you can read: Decision Tree Simplified. In the image above, you can see that the population is classified into four different groups based on multiple attributes, to identify whether 'they will play or not'. To split the population into distinct groups, it uses various techniques such as Gini, information gain, chi-square and entropy.

The best way to understand how a decision tree works is to play Jezzball – a classic game from Microsoft (image below). Essentially, you have a room with moving walls and you need to create walls such that the maximum area gets cleared of balls. So every time you split the room with a wall, you are trying to create two different populations within the same room. Decision trees work in a very similar fashion, by dividing a population into groups that are as different as possible.

Python Code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import tree
#Assumed you have, X (predictor) and y (target) for training data set and x_test (predictor) of test_dataset
# Create tree object for classification; you can set the criterion to gini or entropy (information gain), by default it is gini
model = tree.DecisionTreeClassifier(criterion='gini')
# model = tree.DecisionTreeRegressor() for regression
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted = model.predict(x_test)

R Code
library(rpart)
x <- cbind(x_train, y_train)
# grow tree
fit <- rpart(y_train ~ ., data = x, method="class")
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

Logistic Regression

Logistic Regression: Don't get confused by its name! It is a classification algorithm, not a regression algorithm. It is used to estimate discrete values (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). In simple words, it predicts the probability of occurrence of an event by fitting the data to a logit function. Hence, it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).

Again, let us try to understand this through a simple example. Let's say your friend gives you a puzzle to solve. There are only two outcome scenarios – either you solve it or you don't. Now imagine that you are given a wide range of puzzles and quizzes in an attempt to understand which subjects you are good at. The outcome of this study would be something like: if you are given a tenth-grade trigonometry problem, you are 70% likely to solve it; if it is a fifth-grade history question, the probability of answering it correctly is only 30%. This is what logistic regression provides you.

Coming to the math, the log odds of the outcome are modelled as a linear combination of the predictor variables:
odds = p/(1-p) = probability of the event occurring / probability of the event not occurring
ln(odds) = ln(p/(1-p))
logit(p) = ln(p/(1-p)) = b0 + b1*X1 + b2*X2 + b3*X3 + ... + bk*Xk
Above, p is the probability of presence of the characteristic of interest. The model chooses parameters that maximize the likelihood of observing the sample values, rather than minimizing the sum of squared errors (as in ordinary regression). Now you may ask, why take a log? For the sake of simplicity, let's just say that this is one of the best mathematical ways to replicate a step function. I could go into more detail, but that would defeat the purpose of this article.

Python Code
#Import Library
from sklearn.linear_model import LogisticRegression
#Assumed you have, X (predictor) and y (target) for training data set and x_test (predictor) of test_dataset
# Create logistic regression object
model = LogisticRegression()
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Equation coefficient and Intercept
print('Coefficient: \n', model.coef_)
print('Intercept: \n', model.intercept_)
#Predict Output
predicted = model.predict(x_test)

R Code
x <- cbind(x_train, y_train)
# Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x, family='binomial')
summary(logistic)
#Predict Output
predicted <- predict(logistic, x_test)

Furthermore, there are many different steps that could be tried in order to improve the model: including interaction terms, removing features, regularization techniques, or using a non-linear model.
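To make the link between the linear combination and the predicted probability concrete, here is a tiny sketch of inverting the logit function; the coefficient values are made up for illustration and are not from the original article:

import math

# Illustrative coefficients: b0 (intercept) and b1 (slope for a single predictor X1)
b0, b1 = -1.5, 0.8
x1 = 2.0

# Linear combination on the log-odds scale, then invert the logit to get a probability
log_odds = b0 + b1 * x1
p = 1 / (1 + math.exp(-log_odds))
print(round(p, 3))  # a value between 0 and 1, as described above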

‘But what does it mean?’: Analyzing data

'But what does it mean?': Analyzing data

Introduction: Once you have cleaned and filtered your dataset, it's time for analysis. Analysing data helps us learn what our data might mean and helps us extract answers to our questions from the dataset. Look at the data we imported. (In case you didn't finish the previous tutorial, don't worry – you can copy a sample spreadsheet here.) This is World Bank data containing GDP, population, health expenditure and life expectancy for the years 2000-2011. Take a moment to look at the data. It's pretty interesting – what could it tell us?

Task: Brainstorm ideas. What could you investigate using this data? Here are some ideas we came up with: How much (in USD) is spent on healthcare in total in each country? How much (in USD) is spent per capita in each country? In which country is the most spent per person, and in which the least? What is the average for each continent? For the world? What is the relationship between public and private health expenditure in each country? Where do citizens spend more (private expenditure)? Where does the state spend more (public expenditure)? Is there a relationship between expenditure on healthcare and average life expectancy? Does it make any difference whether the expenditure is public or private?

NOTE: With these last two questions you have to be really careful. Even if you find a connection, it doesn't necessarily mean that one caused the other! For example, imagine there was a sudden outbreak of the plague. It's not always fatal, but many people who contract it will die. Public healthcare expenditure might go up and life expectancy drop right down – but that doesn't mean your healthcare system has suddenly become less efficient! You always have to be really careful about the conclusions you draw from this kind of data, but it can still be interesting to calculate the figures.

There are many more questions that could be answered using this data. Many of them relate closely to current policy debates. For example, if my country were debating its healthcare spending right now, I could use this data to explore how spending in my country has changed over time and begin to understand how my country compares to others.

Formulas: So let's dive in. The data we have is not entirely complete. At the moment, healthcare expenditure is only shown as a percentage of GDP. In order to compare total expenditure in different countries, we need this figure in US dollars (USD). To calculate this, let's introduce you to spreadsheet formulas. Formulas are what helped spreadsheets become an important tool. But how do they work? Let's find out by playing with them.

Tip: Whenever you download a dataset, the very first thing you should do is make a copy of it. Any changes you make should be done in this copy – the original data should remain pure and untouched! This means you can go back and check it at any time. It's also good practice to note where you got your data from, and when and how it was retrieved.

Once you have your own copy of the data (try adding 'working copy' or similar after the original name), create a new sheet within your spreadsheet. This is for you to mess around with whilst you learn about formulae. Now move across to the "Total fruits sold" column and start in the first row. It's time to write a formula...

Walkthrough: Using spreadsheets to add values, using this example data. Let's calculate the total of fruits sold. Get the data and create a working copy.
To start, move to the first row. Each formula in a spreadsheet starts with =. Enter = and select the first cell you want to add; notice how the cell reference appears in the formula. Now type + and select the second cell you want to add. Press Enter or Tab. The formula disappears and is replaced by the value. Try changing the number in one of the original cells (apples or plums); you should see the value in the total update automatically. You can type each formula individually, but it is also possible to cut and paste or drag formulas across a range of cells. Copy the formula you have just written (using Ctrl+C) and paste it into the cell below (using Ctrl+V); you will get the sum of the two numbers on the row below. Alternatively, click on the lower right corner of the cell (the blue square) and drag the formula down to the bottom of the column. Watch the 'total' column update. Feels like magic!

Task: Create a formula to calculate the total amount of apples and plums sold during the week. Did you add all of the cells up manually? That's a lot of clicking – for big spreadsheets, adding each cell manually could take a long time. Take a look at the "spreadsheet formulae" section in the Handbook – can you see a way to add a range of cells or entire columns simply?

Where next? Once you've got the hang of building a basic formula, the sky is the limit! The School of Data Handbook will additionally walk you through: multiplication using spreadsheets, division using spreadsheets, copying formulae sideways, calculating minimum and maximum values, and dealing with empty cells in your data (complex formulae – this stage uses Boolean logic). You may need to refer to these chapters to complete the following challenges.

Multiplication and division challenge
Task: Using the data from the World Bank (if you don't have it already, download it here). In the data we have figures for healthcare only as a % of GDP. Calculate the full amount of private health expenditure in Afghanistan in 2001 in USD. If your percentages are rusty, check out the formulae section in the Handbook.
Task: Still using the World Bank data, find out how much money (USD) is spent on healthcare per person in Albania in 2000.
Task: Calculate the mean and median values for all the columns.
Task: What is the formula for healthcare expenditure per capita? Can you modify it so it's only calculated when both values are present (i.e. neither cell is blank)?
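For readers following along in Python rather than in a spreadsheet, here is a minimal sketch of the same arithmetic; the figures and variable names are illustrative assumptions, not values taken from the actual World Bank file:

# Healthcare expenditure is given as a percentage of GDP;
# to get a USD figure, multiply GDP (in USD) by that percentage
gdp_usd = 2_460_000_000          # illustrative GDP figure in USD
health_pct_of_gdp = 6.8          # illustrative "health expenditure (% of GDP)" value
population = 2_900_000           # illustrative population

health_usd = gdp_usd * health_pct_of_gdp / 100
# Only compute per-capita spending when both values are present (mirrors the blank-cell task above)
health_per_capita = health_usd / population if population else None

print("Total health expenditure (USD):", round(health_usd))
print("Health expenditure per capita (USD):", round(health_per_capita, 2))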

Sort and Filter: The basics of spreadsheets

Sort and Filter: The basics of spreadsheets

Introduction: The most basic tool used for data wrangling is a spreadsheet. Data contained in a spreadsheet is in a structured, machine-readable format and can therefore quickly be sorted and filtered. In other recipes in the handbook, you'll learn how to use the humble spreadsheet as a power tool for carrying out simple sums (finding the total, the average, etc.), applying bulk processes, or pulling out different graphs and charts. By the end of the module, you will have learned how to download data, how to import it into a spreadsheet, and how to begin cleaning and interpreting it using the 'sort' and 'filter' functions.

Spreadsheets: An Overview. Nowadays spreadsheets are widespread, so a lot of people are already familiar with them. A variety of spreadsheet programs and applications exist: Microsoft's Office package comes with Excel, the OpenOffice package comes with Calc, and so on. Not surprisingly, Google decided to add spreadsheets to their documents package. Since it does not require you to purchase or install any additional software, we will be using Google Spreadsheets for this course. Depending on what you want to do, you might consider using different spreadsheet software. Here are some of the considerations you might make when picking your weapon of choice:
Usage: Google Spreadsheets – free (as in beer); Open(Libre)Office – free (as in freedom); Microsoft Excel – commercial
Data storage: Google Spreadsheets – Google Drive; Open(Libre)Office – your hard disk; Microsoft Excel – your hard disk
Needs internet: Google Spreadsheets – yes; Open(Libre)Office – no; Microsoft Excel – no
Installation required: Google Spreadsheets – no; Open(Libre)Office – yes; Microsoft Excel – yes
Collaboration: Google Spreadsheets – yes; Open(Libre)Office – no; Microsoft Excel – no
Sharing results: Google Spreadsheets – easy; Open(Libre)Office – harder; Microsoft Excel – harder
Visualizations: Google Spreadsheets – large range; Open(Libre)Office – basic charts; Microsoft Excel – basic charts

Creating a spreadsheet and uploading data. In this course we will use Google Docs for our data wrangling – it allows you to start right away without needing to install software. Since the data we are working with is already public, we also don't need to worry about the fact that it is not stored on our local drive.

Walkthrough: Creating a spreadsheet and uploading data. Head over to Google Docs. If you are not yet logged in to Google Docs, you need to log in. The first step is to create a new spreadsheet. Do this by clicking the create button on the left and selecting spreadsheet. Doing so will create a new spreadsheet for you. Let's upload some data. You will need the file we downloaded from the World Bank in the last tutorial. If you haven't done the tutorial or have lost the file, download it here. In your spreadsheet, select import from the file menu. This will open a dialog. Select the file you downloaded, don't forget to select insert new sheets, and click import.

Navigating and using the spreadsheet. Now that we have loaded some data, let's deal with the basics of spreadsheets. A spreadsheet is basically a table of "cells" in which you can input data. The cells are organized in "rows" and "columns". Typically rows are labelled by numbers and columns by letters. This also means cells can be addressed by their "column" and "row" coordinates. The cell A1 denotes the cell in the first row of the first column, A2 the one in the second row of the first column, B1 the one in the first row of the second column, and so on. To enter or change data in a cell, click on it and start typing – this will change the contents of the cell. Basic navigation can be done this way or via the keyboard.
Below is a list of keyboard shortcuts that are good to know:
Tab – End input on the current cell and jump to the cell to the right of the current one
Enter – End input and jump to the next row (this tries to be intelligent: if you're entering multiple columns, it jumps back to the first column you were entering)
Up – Move to the cell one row up
Down – Move to the cell one row down
Left – Move to the cell to the left
Right – Move to the cell to the right
Ctrl+<direction> – Move to the outermost cell in the given direction
Shift+<direction> – Select the current cell and the cell in <direction>
Ctrl+Shift+<direction> – Select all cells from the current one to the outermost cell in <direction>
Ctrl+C – Copy: copies the selected cells into the clipboard
Ctrl+V – Paste: pastes the clipboard
Ctrl+X – Cut: copies the selected cells into the clipboard and removes them from their original position
Ctrl+Z – Undo: undoes the last change you made
Ctrl+Y – Redo: undoes an undo

Tip: Practice a bit and you will find that you become a lot faster using the keyboard than the mouse!

Locking rows and columns. The spreadsheet we are working on is quite large. You will notice that, while scrolling, the row with the column labels frequently disappears, leaving you quite lost; the same happens with the country names. To avoid this you can "lock" rows and columns so they don't disappear.

Walkthrough: Locking the top row. Go to the spreadsheet with our data and scroll to the top. On the top left, where the column and row labels are, you'll see a small striped area. Hover over the striped bar on top of the box showing row "1". A hand-shaped cursor should appear; click and drag it down one row. Your result should look like this: try scrolling – notice how the top row remains fixed?

Sorting data. The first thing to do when looking at a new dataset is to orient yourself. This involves looking at maximum and minimum values and sorting the data so it makes sense. Let's look at the columns. We have data about GDP, healthcare expenditure and life expectancy. Now let's explore the range of the data by simply sorting.

Walkthrough: Sorting a dataset. Select the whole sheet you want to sort. Do this by clicking on the grey field in the upper left corner, between the row and column labels. Select "Sort Range..." from the "Data" menu – this will open an additional selection. Check the "Data has header row" checkbox. Select the column you want to sort by in the dropdown menu. Try to sort by GDP – which country has the lowest? Try again with different values – can you sort ascending and descending?

Tip: Be careful! A common mistake is to forget to select all the data. If you sort without selecting all the data, the rows will no longer match up. A version of this recipe can also be found in the Handbook.

Filtering data. The next thing commonly done with datasets is to filter out the values you don't want to see. Did you notice that some "Country Names" are actually not countries? You'll find things like "World", "North America" and "Arab World". Let's filter them out.

Walkthrough: Filtering data. Select the whole table. Select "Filter" from the "Data" menu. You should now see triangles next to the column names in the first row. Click on the triangle next to country name; you should see a long list of country names in the box. Find those that are not a country and click on them (the green check mark will disappear). Now you have successfully filtered your dataset. Go ahead and play with it – the data will not be deleted, it's just not displayed.
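If you prefer to do the same sorting and filtering in code rather than in a spreadsheet, here is a minimal sketch using Python and pandas; pandas is not part of the original tutorial, and the file name and column names below are assumptions – adjust them to match your exported World Bank file:

#Import Library
import pandas as pd

# Load the working copy of the World Bank export (assumed filename)
df = pd.read_csv("world_bank_health_working_copy.csv")

# Sort ascending by GDP to find the country with the lowest value (assumed column name)
df_sorted = df.sort_values("GDP", ascending=True)

# Filter out aggregate rows that are not actual countries (assumed label values)
not_countries = ["World", "North America", "Arab World"]
df_filtered = df_sorted[~df_sorted["Country Name"].isin(not_countries)]

print(df_filtered.head())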

Look Out!: Common Misconceptions and how to avoid them

Look Out!: Common Misconceptions and how to avoid them.

Introduction: Do you know the popular phrase, "There are three kinds of lies: lies, damned lies and statistics"? It illustrates the common distrust of numerical data and the way it is displayed, and it has some truth: for too long, graphical displays of numerical data have been used to manipulate people's understanding of 'facts'. There is a basic explanation for this. All information is included in the raw data, but before raw data is processed it is too much for our brains to understand. Any calculation or visualisation – whether that's as simple as calculating an average or as complex as producing a 3D chart – involves losing a certain amount of data, so that we can take it in. It's when people lose data that's really important, and then try to make big statements about the whole data set, that most mistakes get made. Often what they say is 'true', but it doesn't give the full story. In this tutorial we will talk about common misconceptions and pitfalls when people start analysing and visualising data. Only if you know the common errors can you avoid making them in your own work, and avoid falling for them when they are mistakenly cited in the work of others.

The average trap. Have you ever read a sentence like "The average European drinks 1 litre of beer per day"? Did you ask yourself who this mysterious "average European" was and where you could meet him? Bad news: you can't. He or she doesn't exist. In some countries, people drink more wine than beer. How about people who don't drink alcohol at all? And children? Do they drink 1 litre per day too? Clearly this statement is misleading. So how did this number come about? People who make these kinds of claims usually get hold of a large number – e.g. every year 109 billion litres of beer are consumed in Europe – then simply divide that figure by the number of days per year and the total population of Europe, and blare out the exciting news. We did the same thing two modules ago when we divided healthcare expenditure by population. Does this mean that all people spend that much money? No. It means that some spend less and some spend more – what we did was to find the average.

The average makes a lot of sense – if the data is normally distributed. A normal distribution is the classic bell-shaped curve. The image above shows three different normal distributions. They all have the same average, and yet they are clearly different. What the average doesn't tell you is the range of the data. Most of the time we do not deal with normal distributions either: take income, for example. The average income (something frequently reported) would suggest that half of the people earn less and half of them earn more than the average. This is wrong. In most countries, many more people earn below the average salary than above it. How? Incomes are not normally distributed; they show a peak around a certain level and then a long tail towards large salaries. The chart shows the actual income distribution in US$ for households up to 200,000 US$, from the 2011 census. You can see that a large number of households have incomes around 15,000-65,000 US$, but there is a long tail skewing the average up. If the average income rises, it could be because most people are earning more. But it could also be that a few people in the top income group are earning way more – both would move the average.

Task: If you need some figures to help you think about this, try the following. Imagine 10 people.
One earns 1€, one earns 2€, one earns 3€... up to 10€. Work out the average salary. Now add 1€ to each of their salaries (2€, 3€... 11€). What is the average now? Now go back to the original salaries (1€, 2€, 3€, etc.) and add 10€ only to the very top salary (so you have 1€, 2€, 3€... 9€, 20€). What's the average now?

Economists recognise this and have added another value: the "Gini coefficient" tells you something about the distribution of income. It is a little complicated to calculate and beyond the scope of this basic introduction, but it is worth knowing it exists. A lot of information gets lost when we only calculate an average, so keep your eyes peeled as you read the news and browse online.

Task: Can you spot examples where the use of the average is problematic?

More than just your average... So if we're not to use the average, what should we use? There are various other measures which can give a simple mean figure some more context. Combine the average figure with the range, e.g. "range 20-5000 with an average of 50". Take our beer example: it would be slightly better to say "0-5 litres a day with an average of 1 litre". Use the median: the median is the value right in the middle, where 50% of values are above and 50% of values are below. For the median income it holds true that 50% of people earn less and 50% of people earn more. Use quartiles or percentiles: quartiles are like the median but for 25%, 50% and 75%; percentiles are the same but for varying percentage ranges (usually 10% steps). This gives us far more information than the average alone – it also tells us something about the distribution of the data (e.g. do 1% of the people really hold 80% of the wealth?).

Size matters. In data visualization, size actually matters. Look at the two column charts below and imagine their headlines. For the graph on the left, you might read "Health expenditure in Finland explodes!". The graph on the right might come under the headline "Health expenditure in Finland remains mainly stable". Now look at the data: it is the same data, presented in two different (incorrect) ways.

Task: Can you spot why the data is misleading? In the graph on the left, the axis doesn't start at $0 but somewhere around $3000. This makes the differences appear proportionally much larger – for example, expenditure from 2001-2002 appears to have tripled, at least! In reality this wasn't the case. The square aspect ratio of the graph (it is the same height as width) further aggravates the effect. The graph on the right starts at $0 but has a range up to $30,000, even though our data only goes up to about $9000. This is more accurate than the graph on the left, but it is still confusing. No wonder people think of statistics as lies if they are used to deceive people about data. This example illustrates how important it is to visualize your data properly. Here are some simple rules: always use a range that is appropriate to your data, and note it properly on the respective axis. The changes in size we see in a chart should actually reflect the changes of size in your data: if your data shows that B is 2 times A, then B should be 2 times bigger in your visualization.

The simple "reflect the size" rule becomes even harder in two dimensions, when you have to worry about the total area. At one point, news outlets started to replace columns with pictures, and then continued to scale the dimensions of the pictures up in the old way. The problem?
If you adjust the height to reflect the change and the width automatically increases with it, the area increases even more and the comparison becomes completely wrong! Confused? Look at these bubbles. Task: We want to show that B is double the size of A. Which representation is correct, and why? Answer: the diagram on the right. Remember the formula for calculating the area of a circle? (Area = πr²; if this doesn't look familiar, see here.) In the left-hand diagram, the radius of A (r) was doubled. This means that the total area goes up by a factor of four, which is wrong. If B is to represent a number twice the size of A, we need the area of B to be double the area of A. To calculate this correctly, we need to scale the radius by √2. This gives a realistic change in size.

Time will tell? Timelines are also critical when displaying data. Look at the chart below: a clear, stable increase in healthcare costs since 2002? Not quite. Notice how before 2004 there are 1-year steps, but afterwards there is a gap between 2004 and 2007, and between 2007 and 2009. This presentation makes us believe that healthcare expenditure has increased continuously at the same rate since 2002 – but actually it hasn't. So if you deal with timelines, make sure that the spacing between the data points is correct. Only then will you be able to see the trends correctly.

Correlation is not causation (by XKCD). This misunderstanding is so common and well known that it has its own Wikipedia article. There is nothing more to say about this: simply because two data series show changes that are correlated, it doesn't mean that one causes the other.

Context, context, context. One thing that is incredibly important for data is context: a number or quantity doesn't mean a thing if you don't give it context. So explain what you are showing, explain how it is read, explain where the data comes from and explain what you did with it. If you give the proper context, the conclusion should come right out of the data.

Percent versus percentage points change. This is a common pitfall for many of us. If a value changes from 5% to 10%, by how many percent has it changed? If you answered 5%, I'm afraid you're wrong: the answer is 100% (10% is 200% of 5%). It is a change of 5 percentage points. So take care the next time people report on elections, surveys and the like – can you spot their errors? Need a refresher on how to calculate percentage change? Check out the "Maths is Fun" page on it.

Catching the thief – sensitivity and large numbers. Imagine you are a shop owner and you have just installed an electronic theft detection system with 99% accuracy at detecting theft. The alarm goes off: how likely is it that the person who just passed is a thief? It's tempting to answer that there is a 99% chance this person stole something, but that isn't necessarily the case. In your store you'll have honest customers and shoplifters, and the honest customers vastly outnumber the thieves: say there are 10,000 honest customers and just 1 thief. If all of them pass in front of your alarm, the alarm will sound roughly 101 times. 1% of the time it will mistakenly identify an honest customer as a thief, so it will sound about 100 times for honest customers. 99% of the time it will correctly recognise a shoplifter, so it will probably sound once when your thief does walk past. But of the roughly 101 times it sounds, only once will there actually be a shoplifter in your store.
So the chance that a person is actually a thief when the alarm sounds is just below 1% (0.99%, if you want to be picky).
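To make the arithmetic behind this example explicit, here is a small sketch in Python using exactly the counts from the text; the variable names are just for illustration:

# Base-rate example from the text: 10,000 honest customers, 1 thief,
# and a detector that is right 99% of the time
honest_customers = 10_000
thieves = 1
false_positive_rate = 0.01  # chance the alarm sounds for an honest customer

# Roughly 100 false alarms from honest customers, plus about 1 alarm for the real thief
false_alarms = honest_customers * false_positive_rate
thief_alarms = 1

# Probability that an alarm actually corresponds to a thief
p_thief_given_alarm = thief_alarms / (thief_alarms + false_alarms)
print(round(p_thief_given_alarm * 100, 2), "%")  # about 0.99%, i.e. just below 1%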

How to get data from the World Bank

Walkthrough: Downloading data from the World Bank. In Data Fundamentals, we address the question of how healthcare spending affects life expectancy around the world. This is one of the answers we can find by looking at data from the World Bank. Open the World Bank data portal: it lives at http://data.WorldBank.org. Select Data Catalog from the menu at the top. In the long list at the bottom, find "World Development Indicators" and click on the tabular icon on the left. You'll find a very different site: the Databank, an interface to the World Bank database. You can select what data you want to see, from which countries, and for what period of time.

First select the countries. We're interested in all the countries, so click on select all (the check box icon). You can see how many countries you have selected in the top right corner. Click on Series under the Country view. Now you'll see a long list of data series you can export; we'll need a few of them. Select "Health expenditure, private (% GDP)", "Health expenditure, public (% GDP)" and "Health expenditure, total (% GDP)". Since the expenditure is given as a % of GDP, we'll need the GDP as well, and since we want to compare countries directly we'll need GDP in US$. To do this, type GDP into the search box and find the entry "GDP, PPP (current international US$)". If we want to see how healthcare expenditure affects life expectancy, we need to add life expectancy to the data: search for "Life expectancy at birth, total (years)". Now let's add one more thing, population, so that we can calculate how much is spent by and on an average person. Search for "Population" and select "Population, total".

Click on the selected Series in the top left corner. Bring GDP and Population to the top (drag and drop them at the side of the list); your selection should now look like this. Click on Time to select the years we are interested in; to keep things simple, select the 10 most recent years. Click on Apply Changes and you'll see a preview of the data. In the top left there is a rough layout of how your downloaded file will look. You'll see "time" in the columns section and "series" in the rows section – this influences how the spreadsheet will look. While this might be great for some people, the data is a lot easier to handle if all of our "series" are in columns and the years are in different rows, so let's change this. Once rearranged, your organization diagram should look like this, and you should notice that the preview has changed. This is how your downloaded file will look. Now let's export. If you click on the Export button, a pop-up will appear asking you for the format: select CSV. It will automatically download the file – store and name it in a folder so that you remember where it comes from and what it is for.
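If you later want to take a first look at the exported file in Python instead of a spreadsheet, a minimal sketch might look like this; the filename is an assumption – use whatever name you gave the download:

#Import Library
import pandas as pd

# Load the CSV exported from the World Bank Databank (assumed filename)
df = pd.read_csv("world_development_indicators.csv")

# A quick first look at the data: column names, size and the first few rows
print(df.columns.tolist())
print(df.shape)
print(df.head())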