One of the points we make in our book Everydata is that the amount of data around us is expanding at an exponential rate. In Chapter 1, we walk you through the typical day of a consumer of "little data," which is estimated to be more than 34 GB--or the amount of printed information you could fit into dozens of pickup trucks.
With all this data, an interesting new project by Alexandra Meliou at UMass Amherst focuses on the ways collected data can go wrong--or what is called "Bad DATA." As explained in this article from the Daily Hampshire Gazette, her five-year project will focus on "how data is accumulated and shared to gain insight into how such information is weakened by bad curation or being taken out of context."
Why does data quality matter? Because even the soundest statistical methods can't help you make good decisions if the underlying data isn't measuring what you think it is. A good analogy is cooking: it's like following the best recipe in the world but using the cheapest (or even mislabeled) ingredients.
We look forward to following Meliou's work.