A recent graduate in journalism and physics, seeking to mine the depths of open data and Freedom of Information requests. Tools include an iMac, Excel, MySQL Server, Google Refine and various other freeware, including the web scraping platform ScraperWiki. Follow me on Twitter @DataMinerUK.
Here is a timeline of my data journey, starting from when I first heard about this thing called Computer-Assisted Reporting (CAR).
Well, computers have moved on since journalists were hacking away at spreadsheets a decade ago, so I decided to see how CAR had come along. This proved puzzling: in almost all news institutions it has been overlooked.
I had been at ITN and the BBC, and was working at CNN, during this period of data curiosity on my part. Social media provided some sort of platform for exploring data in the newsroom, it being the latest buzzword that execs are actually interested in (unlike data which, in my opinion, is a much more fruitful venture when it comes to generating actual news).
So I did my data journalism out of hours and gathered a lot of news from social media during hours. This was made possible by the many web applications built by developers (they make money, which is the sore point in data journalism).
At the beginning of this year, I upped and left the newsroom for the programming terminal, to look at applications for data, serious data. I'm now at ScraperWiki. The thinking behind this: The Times paired a programmer with a journalist to start working on stories for the web, so the programmer has the journalistic platform as his playing field. But what if you pair a journalist with programmers on the programming playing field? You can make the field. You create the platform for a purpose, and then repurpose it for the story rather than repurposing the story for the medium.
It’s hard to explain, but hopefully this blog about my progress will reveal whether the experiment ultimately works.