Praat is a great tool for analysing speech data, but lately I came across a frustrating problem. While trying to open a txt file (a vector of numbers) in Praat, I would get the following error message: "File not recognized. File not finished."
After consulting my fellow PhD students I discovered that what I was missing was a header enabling Praat to read txt files.
The simplest way to fix this error is to add the following header to a text file using your favourite text editor:
However, if you want to automate the process, scripting can save you a lot of time. That’s why I created a function (txt2praat.R) that appends this header to the original text file and saves the output to a new text file.
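A rough sketch of what such a function could look like (the header itself is not reproduced in this excerpt, so header_lines below is only a placeholder):

    # Hypothetical sketch of txt2praat(): prepends a Praat-compatible header
    # to a plain text file of numbers and writes the result to a new file.
    # 'header_lines' is a placeholder -- substitute the actual header that
    # Praat requires.
    txt2praat <- function(input_file, output_name,
                          header_lines = "<Praat header goes here>") {
      numbers <- readLines(input_file)              # original vector of numbers
      writeLines(c(header_lines, numbers),          # header first, then the data
                 con = paste0(output_name, ".txt"))
    }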
You can use the function in the following way:

    txtfile <- file.choose()
    txt2praat(txtfile, "testfile-modified")
These commands should create a txt file (testfile-modified) with the short header added at the top. The new file can then be opened in Praat without the error message.
I finally found some time to crunch numbers from a Kaggle swag competition. The available dataset was rather large, but I wanted to focus on the latest data (from 2013), so I only analysed MERGED2013_PP.csv. I started filtering the numbers in R, but then decided to move back to Tableau for interactive visualisations. The result can be seen underneath and I hope it’s self-explanatory.
I’m back to analysing political data after finding a nicely formatted data set on one of my favourite blogs. The blog post that inspired me discussed the possibility of predicting election results using polls and popularity data found online. In brief, the answer is: not yet. However, with the increasing number of people using digital media and responding to opinion polls, these channels will have more impact on future political campaigns.
I didn’t use the actual election results in this analysis, only the variables that came with the compiled data set. The variables in question are: Google Trends popularity, Social Media popularity, and Opinion Polls. More details about the data can be found here (text in Polish).
After loading the data, I used the missmap function to examine the missing values. There seem to be quite a few gaps in the data about the polls, social media, and Google Trends (in decreasing order).
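Something like this, where election_data stands in for the compiled data set (the object name is mine) and missmap comes from the Amelia package:

    library(Amelia)   # provides the missmap() function

    # Visualise where the missing values sit across all variables
    missmap(election_data, main = "Missing values map")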
To get an overview I used tableplot from the tabplot package.
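For example, using the same assumed election_data object:

    library(tabplot)

    # tableplot() sorts and bins the rows, then shows the distribution of
    # every column, giving a quick overview of the whole data set
    tableplot(election_data)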
The next step was plotting time series of the individual variables.
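Roughly like this (the column names, date plus the three measures, are assumed rather than taken from the original data set):

    library(ggplot2)
    library(reshape2)

    # Reshape to long format so each variable gets its own panel
    long_data <- melt(election_data, id.vars = "date")
    ggplot(long_data, aes(x = date, y = value)) +
      geom_line() +
      facet_wrap(~ variable, scales = "free_y")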
The plots above show that the overall Social Media and Google Trends activity (dark blue line) increased closer to election day. The averaged rating (dark blue line) of all parties in the polls seemed fairly stable. This is probably not the most interesting finding, so splitting the values by party/candidate would be a natural next step.
An autocorrelation analysis was run on the cleaned data frame (rows with NAs were removed) to show how each variable correlates with itself over time.
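Something along these lines (column names again assumed):

    # Drop rows with missing values, then inspect the auto- and
    # cross-correlations of the numeric variables
    election_clean <- na.omit(election_data)
    acf(election_clean[, c("google_trends", "social_media", "polls")])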
Recently I’ve been playing with the idea of comparing the popularity of various people and ideas. I’ve previously queried Wikipedia pageviews using R, but I wondered whether the same could be done with Google Trends or Google Ngrams. Both of these Google services provide interesting insights into the relative popularity of various queries. Luckily for me, other people have already created fantastic packages connecting R with Google Trends and Google Ngrams.
One of the topics that interested me as an experimental psychologist was the changing popularity of two psychoanalysts – Sigmund Freud and Carl Jung. Knowing that psychology is becoming more empirical, I expected that these two gentlemen would start losing their stardom as time goes by.
Trends extracted from Google Ngrams show the peak popularity of both psychoanalysts around 1995. The relative frequency of occurrence of their names seems to have been declining since then.
However, the last year recorded in the Ngram data was 2008, so things could have changed since then. To answer this question, I queried Google Trends, which shows the relative frequency of Google search terms. I didn’t set the locale in the function, so I assume that the results are for global searches (but I used the English spelling of the names).
The results from Google Trends support the Ngram results: a decline in the popularity of both Freud and Jung can be observed with this measure as well.
This was just a brief write-up of my analysis, so feel free to modify my code:
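A rough sketch of the two queries, assuming the ngramr and gtrendsR packages (my guess at the R-to-Google connections mentioned above):

    library(ngramr)    # queries the Google Books Ngram Viewer
    library(gtrendsR)  # queries Google Trends

    # Relative frequency of the two names in the Google Books corpus
    freud_jung_books <- ngram(c("Sigmund Freud", "Carl Jung"),
                              year_start = 1900, year_end = 2008)
    ggram(freud_jung_books)

    # Relative frequency of the two names as Google search terms
    freud_jung_search <- gtrends(c("Sigmund Freud", "Carl Jung"))
    plot(freud_jung_search)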
Recently I came across an interesting data journalism project called The Migrants’ Files, which collects and analyses information related to migration. Data about dead and missing would-be migrants was publicly available, so I created a dashboard in Tableau using a Google Spreadsheet Web Connector (described in my previous post).
Here’s the result:
Tableau Desktop 9.1 is out, and Web Data Connectors are available as a new data source in this version. Luckily, Tableau released several working connectors to popular web data sources, and Google Sheets is one of them. Today I tried to connect to Google Sheets, but I couldn’t find a step-by-step description of setting it up, and the official thread lacked details about configuring the WDC. It took me a while to figure out how to start using Web Data Connectors, so I decided to write this tutorial, which will hopefully help others get started. It describes how to use Tableau web connectors hosted locally.
Before you start using Web Data Connectors, you need to activate Internet Information Services (IIS) (more info is available in this Stackoverflow thread).
This can be done on Windows 10 by pressing the Windows key and typing Windows Features.
Then go to Turn Windows Features On or Off, and tick the box next to Internet Information Services.
Now you should be able to see IIS in Control Panel > Administrative Tools.
Open IIS Manager and check the values in the Binding and Path columns. Binding should be set to *:8888 (http), and Path points to the folder where the Tableau connector files will be stored.
If Binding is set to a value other than 8888 (mine was set to 80 by default), then go to Actions and click on Bindings.
Then change the Port number to 8888 and click OK.
Now your IIS Manager configuration should look like this:
Check the value of the Path column. It’s set to %SystemDrive%\inetpub\wwwroot by default, which in most cases means C:\inetpub\wwwroot.
To start using Google Sheets as a data source, you first need to download the Web Data Connector SDK.
Unzip the zip file and go to the Examples folder.
Copy all files from the Examples folder to the Path specified in IIS. In my case it was C:\inetpub\wwwroot
Now you should be ready to start using Web Data Connectors. Go to Connect > To a server > Web Data Connector.
A pop-up window like the one below should appear.
If you copied the example files provided by Tableau to your IIS path then type the following into the address bar of the WDC pop-up window: http://localhost:8888/GoogleSheetsConnector.html
You should see the following window:
All you need to do now is provide a link to the Google Sheet you want to use. You might also need to log into your Google Account if the Sheet is private.
Now you can start using Google Sheets as a data source. Hooray!
If you need to use other connectors, just change the path in the address bar of the pop-up window.
Quandl is a great provider of various types of data that can be easily integrated with R. I wanted to play with the available data and see what kinds of insights could be gathered.
I wrote a short script that collects British Pound Sterling (GBP) to Polish Złoty (PLN) historical exchange rates from 1996 until today (4th September 2015).
The first thing I did was load the libraries that would be used:
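Roughly this set (each one appears later in the post):

    library(Quandl)            # downloading the data
    library(xts)               # time-series objects
    library(lubridate)         # extracting years, months and days
    library(ggplot2)           # static plots
    library(dygraphs)          # the interactive plot
    library(AnomalyDetection)  # Twitter's anomaly detection package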
Then I downloaded the exchange rate data from Quandl. The data can be downloaded in several popular formats (e.g. ts, xts, or zoo), each appropriate for different packages.
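For example (the CURRFX/GBPPLN dataset code is an assumption based on the columns described below):

    # Download the GBP/PLN exchange rates as an xts object
    gbp_pln <- Quandl("CURRFX/GBPPLN", type = "xts",
                      start_date = "1996-01-01", end_date = "2015-09-04")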
Once I had the data, I started cleaning the data frame downloaded from Quandl.
The column names designating the High and Low rates had whitespace in them, which can cause all sorts of problems in R. I removed the ‘High_est’ and ‘Low_est’ columns from the xts data to make plotting with dygraphs easier. Thanks to the lubridate package, I easily extracted years, months, and days from the Date field. I also added a ‘Volatility’ column showing the difference between the High and Low rates.
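Put together, the cleaning could look something like this (column names are assumed):

    # Rename the columns so they contain no whitespace
    colnames(gbp_pln) <- c("Rate", "High_est", "Low_est")

    # Keep only the Rate column in the xts object for the dygraphs plot
    gbp_pln_rate <- gbp_pln[, "Rate"]

    # Data frame version for ggplot2, with dates split up via lubridate
    gbp_pln_df <- data.frame(Date = index(gbp_pln), coredata(gbp_pln))
    gbp_pln_df$Year       <- year(gbp_pln_df$Date)
    gbp_pln_df$Month      <- month(gbp_pln_df$Date)
    gbp_pln_df$Day        <- day(gbp_pln_df$Date)
    gbp_pln_df$Volatility <- gbp_pln_df$High_est - gbp_pln_df$Low_est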
Now that all my data was cleaned, I could start plotting. I began by shamelessly copying the code for the Reuters-like plot included in Quandl’s cheat sheet.
All plots were created using the following code:
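A rough sketch of the histogram and box plots described below:

    # Volatility histogram
    ggplot(gbp_pln_df, aes(x = Volatility)) +
      geom_histogram()

    # Exchange rate box plots by year and by month
    ggplot(gbp_pln_df, aes(x = factor(Year), y = Rate)) +
      geom_boxplot()
    ggplot(gbp_pln_df, aes(x = factor(Month), y = Rate)) +
      geom_boxplot()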
The results can be seen underneath.
The Volatility histogram clearly shows that many fields have missing values.
Comparing the blank fields with those containing non-zero values showed that the numbers of blank and non-blank fields are almost the same. This means that Volatility should not be used, as it has too many missing values.
The box plots show how the GBP/PLN exchange rate fluctuates across different years and months. They also show the mean values and the variation in the exchange rate.
Last, but not least, was the interactive plot created with the dygraphs package. That’s definitely my favourite, as it allows fine-grained exploration of the underlying information (including date range selection).
I added event lines at the points that seemed relevant to the observed maximum and minimum values of the GBP/PLN exchange rate. Around the time when Poland joined the EU, the GBP/PLN exchange rate peaked at 7.3 PLN. Four years later, just before Lehman Brothers went bankrupt, Pound Sterling was valued as low as 4.07 PLN.
The interactive plot was created using the following code:
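Something like this (a sketch; the event dates, 1 May 2004 for Poland joining the EU and 15 September 2008 for the Lehman Brothers bankruptcy, only illustrate the dyEvent() calls):

    dygraph(gbp_pln_rate, main = "GBP/PLN exchange rate") %>%
      dyRangeSelector() %>%
      dyEvent("2004-05-01", "Poland joins the EU", labelLoc = "bottom") %>%
      dyEvent("2008-09-15", "Lehman Brothers bankruptcy", labelLoc = "bottom")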
I thought that I’d had enough of simple plots and that it was time for more complicated analyses. I started with the Seasonal Decomposition of Time Series by Loess, using the stl function.
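For example, aggregating the daily rates to a monthly ts object first (an assumption on my part; the original code may differ):

    # Aggregate the daily rates to monthly means and decompose the resulting
    # series into seasonal, trend and remainder components
    monthly <- aggregate(Rate ~ Year + Month, data = gbp_pln_df, FUN = mean)
    monthly <- monthly[order(monthly$Year, monthly$Month), ]
    rate_ts <- ts(monthly$Rate, start = c(1996, 1), frequency = 12)
    plot(stl(rate_ts, s.window = "periodic"))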
Seasonal, trend, and irregular components were extracted from the underlying data. The main trend shows the appreciation of GBP against PLN, but the seasonal component might (?) be indicating an approaching correction of that trend.
I wanted to finish the analysis by applying the AnomalyDetection package, released by Twitter’s engineering team, to the data collected from Quandl. My plan was to see whether there were any anomalous changes in the exchange rate during the recorded period.
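A sketch of that step (AnomalyDetectionTs() expects a two-column data frame with a POSIXct timestamp and a value; the parameter values below are only illustrative):

    # Two-column data frame: POSIXct timestamp + value
    anomaly_input <- data.frame(timestamp = as.POSIXct(gbp_pln_df$Date),
                                rate      = gbp_pln_df$Rate)
    anomalies <- AnomalyDetectionTs(anomaly_input, max_anoms = 0.01,
                                    direction = "both", longterm = TRUE,
                                    plot = TRUE)
    anomalies$anoms   # the flagged observations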
Running this code returned one anomalous value, 7.328 PLN, which was the maximum value observed in the collected time series.
I hope that you enjoyed this walkthrough. In future posts I want to describe more ways to analyse time series.
I found an interesting data source on FigShare showing how much University College London paid in Article Processing Charges (APCs). According to the description, this data set should include items between 1st April 2013 and 31st July 2014. However, some entries are dated earlier or later than that, so I guess only the aforementioned period is complete. I included all the data anyway and added a linear regression line on top of the plot showing total cost by date. Judging by that, the costs seem to keep increasing. However, this covers only a short period of time, and I don’t know whether the trend held over a longer period.
Large journals (e.g. Nature Communications, PLOS One, and Lancet) stand out as receiving more in APCs than others. The vast majority of articles in this data set have been published under a CC BY licence.
Feel free to further explore the plots. They are all interactive.
Lately I came across a data set published by Research Councils UK (RCUK) about the diversity of grant applicants. The document didn’t include any plots, so I decided to fix that. Unfortunately, the report was only available as a PDF, which is not the best format for acquiring data.
Luckily, I came across Tabula, an open source project for extracting data from tables in PDF files. With the help of Tabula, I extracted the data from the document to create the plots. The formatting of the original document required too much data wrangling, so I only used data from the first table.
This data shows an estimate of the academic populations applying to the Research Councils.
Times Higher Education published its 2015 pay survey in April, but only now have I found the time to create interactive visualisations of their data.
I had to edit the table to make it usable in Tableau. I removed the ‘All’ column from the original data source; it most likely represented weighted means, but I couldn’t find any information about how it was calculated, so I only used the ‘Female’ and ‘Male’ values. The combined (‘All’) values used in my workbooks are the means of these two fields.
I used Edubase data to find additional information about universities, e.g. region or location (available only for HE institutions in England).