Hosting #wjchat — Finding the story in the data

UPDATE: More later, but if you missed the geekery and fantastic exchange of knowledge that you get with a phenomenally sharp, inquisitive and dedicated group like the #wjchatters, you can find the transcript here.

Tomorrow, Wed. Sept. 8, we’ll be discussing “Finding the story in the data” at #wjchat, and I have been tapped to host. (One might ask why; they must really be running low on people… I kid, I kid.)

This issue is extremely important, and one that Web journalism groups must address. There’s a difference between telling a journalistic story without the data to back it up, and displaying data in an aesthetically pleasing way that doesn’t really tell a story. The best data journalism does both. It’s rare, and I believe we all need to work harder at doing more of it. That’s a tall order for someone working alone, or even a team, but if the community puts its collective head together, we’ve got a better shot.

I couldn’t be more excited to discuss some of the issues I’ve spent much of the last year internalizing. I hope I have better ideas about this now than I did last September, but it’s also one of my favorite topics to ponder, because there are so many ideas I know I haven’t considered yet. I hope to report back with a nice compilation of links and thoughts once we tap into the community’s knowledge, but for now, I’m just spreading the word.

If you read this blog but aren’t familiar with #wjchat, it’s a gathering on Twitter on Wednesday evenings where we discuss various aspects of online journalism, ranging from social media to video to how to get a job to data. I love all the facets of journalism, so it’s a great way to guarantee yourself great conversation and learning wrapped up into one package. At 5 p.m. Pacific / 8 p.m. Eastern on Wednesday (that’s tomorrow), just hop on over to Twitter and follow the hashtag #wjchat. I find it easier to follow via tweetchat.com, which gives you a little chatroom with a nicer interface than twitter.com, because the conversation can get fast and furious.

In the meantime, if there are specific topics you’d like to see discussed, or questions you’d like asked, get at me before or during the chat.

And if you’re one of my dear NICARian mentors, it’d be really fantastic if you could find the time to drop by. What I’ve picked up in my career thus far is minimal, and I owe it all to the collective wisdom of those who’ve taught me what I know as a journalist and a programmer.

See you in the virtual space!

  • Chip Oglesby (http://www.chipoglesby.com)

    Maybe some ideas on how to get machine-readable data through FOIA requests. Or what tools, such as ScraperWiki, to use to get information out of PDFs. Or what databases, such as Socrata or Google Fusion Tables, to store your information in.

    I’m really interested in hearing what people have to say about data visualization.


    Michelle Minkoff replied:

    We took these into account for the questions; thanks so much for your thoughts here and during the chat! I hope we were able to touch on some of this, and I’ll try to elaborate on the scraping and storing aspects of this type of work in the post I’m working on. The one caveat I’d make now is that Google Fusion Tables and Socrata can only take storing info so far, and that’s where the power of a database comes in. That said, at the Times we did an interactive city salary database based on a Google spreadsheet, so it is doable. It just isn’t ideal once you pass a couple hundred records.
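
    To give a rough sense of the spreadsheet-as-backend approach, here is a minimal sketch in Python. The spreadsheet key is a hypothetical placeholder, and load_rows is just an illustrative helper; it pulls rows from a published Google spreadsheet’s CSV export:

        import csv
        import urllib.request

        # Hypothetical published-sheet URL; swap in your own spreadsheet key.
        SHEET_CSV_URL = ("https://docs.google.com/spreadsheets/d/"
                         "YOUR_SPREADSHEET_KEY/export?format=csv")

        def load_rows(url=SHEET_CSV_URL):
            """Fetch the sheet as CSV and return a list of dicts, one per row."""
            with urllib.request.urlopen(url) as response:
                text = response.read().decode("utf-8")
            return list(csv.DictReader(text.splitlines()))

        if __name__ == "__main__":
            rows = load_rows()
            # Comfortable at a few hundred records; past that, a real database pays off.
            print("Loaded", len(rows), "records")

    Something this lightweight is why the spreadsheet route works for a small salary lookup, and also why it starts to strain once the data grows.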
