Beep. Boop. That’s the sound of my commute

In spare moments, I find myself strangely drawn to discount site Wish. On a whim, I forked out a few quid on a GoPro rip-off. It lasted less than a minute before it developed a fault. But it kind of works (and I got a full refund!). So I stuck my now-free camera on my helmet and filmed my cycle commute.

I was interested in how I might ‘see’ the commute in a way that didn’t mean sitting through 20 minutes or so of me huffing and blowing down various streets. I remembered a project by ace designer Brendan Dawes called CinemaRedux.

He created ‘visual fingerprints’ of well known films by taking one frame every second of the film and laying them out in rows of 60; one row for every minute of running time. They are fascinating and give an interesting perspective on the film, especially the use of colour. I thought this would be a nice way to see my commute. 

Brendan Dawes’ CinemaRedux of Taxi Driver.

A while ago a developer called Ben Sandofsky created an app called Thumber which creates them, but it didn’t work on my Mac (it was built for Leopard). So, having recently dipped my toes into Python programming (I’ve been scraping Twitter for some research), I thought why not see if I could do it using Python.

A lot of GIANT CAP development later, I got it to work. The result…

A ‘cinema redux’ of my ride from work

You can see it’s no Taxi Driver. But there’s the occasional splash of green in the grey of the road and Manchester sky. As a ‘fingerprint’ of my journey, I think it works well. The final Python code that makes them is available on GitHub.

It’s clunky and inefficient. But it works and I was inordinately pleased just by the fact that it doesn’t crash (much). So what could I do next with my new-found programming powers?
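If you’re curious what the guts of that idea look like, here’s a minimal sketch (not the actual GitHub code) using OpenCV and Pillow to grab one frame per second and tile them 60 to a row; the video path, thumbnail size and output filename are all placeholders.

# Sketch of the 'cinema redux' idea: one frame per second of video,
# tiled 60 to a row (one row per minute). Not the actual GitHub script;
# the path and sizes below are placeholders.
import cv2
from PIL import Image

VIDEO = "commute.mp4"
THUMB_W, THUMB_H = 8, 6   # size of each tiny frame in pixels
PER_ROW = 60              # 60 frames = one minute per row

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
thumbs = []
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % int(round(fps)) == 0:  # keep roughly one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        thumbs.append(Image.fromarray(rgb).resize((THUMB_W, THUMB_H)))
    frame_no += 1
cap.release()

rows = -(-len(thumbs) // PER_ROW)  # ceiling division
sheet = Image.new("RGB", (PER_ROW * THUMB_W, rows * THUMB_H), "black")
for i, thumb in enumerate(thumbs):
    sheet.paste(thumb, ((i % PER_ROW) * THUMB_W, (i // PER_ROW) * THUMB_H))
sheet.save("redux.png")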

What does my commute sound like?

In my last job, one of the PhD students, Jack Davenport (he does some really cool stuff btw), was working on a project called the sound of colour which explored playful ways to make music that broke away from standard interfaces like keyboards etc. One experiment included constructing a large table that users could roll loads of coloured balls across. A camera tracked the balls and converted their position and colour into data to play sounds and loops. I loved the idea. Maybe it was there in the back of my mind when I thought it might be cool to work out what the cinemaredux of my commute sounded like.

Sonification of data

Making data audible is not a new concept. As well as a healthy and diverse electronic music scene, there’s a growing and scarily creative community of programmers and musicians experimenting with real-time coding of music. There’s also loads of interesting stuff around using it to explore research data. It’s even got a bit of a foothold in data journalism. Check out The Center for Investigative Reporting piece on creating an audio representation of earthquake data by Michael Corey @mikejcorey. There’s code and everything. On that note you should also check out Robert Kosara’s piece Sonification: The Power, The Problems. But I digress.

After some reading around I settled on the following basic idea:

  • analyse each image generated by my cinemaredux script and work out what the dominant colour was in each. But I didn’t want one note per picture; the information was too rich for that. But at the same time I didn’t want to create loads of notes from pixels in the image. I needed to filter the data somehow (there’s a rough sketch of that analysis just after this list).
  • convert the RGB values of each colour into a MIDI note. I chose MIDI because it gave me the most flexibility and I had a vague idea of how it worked left over from my distant past in music tech. It’s essentially a data file with what note to play, when and for how long. No sounds etc. I thought this would be easier — once I had data from the image it would just be a case of converting numbers. It would also give me more room to experiment with what the data ‘sounded’ like later on.
Analysing an image to work out the dominant colours filtered the data to something I could use
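As a rough idea of what that analysis step can look like, here’s a hedged sketch using scikit-learn’s k-means (the k-means approach is the one mentioned in the footnote at the end of this post, but the file name and cluster count here are placeholders rather than my exact script).

# Minimal sketch: pull the dominant colours out of one frame with k-means.
# Placeholder file name and cluster count; not the exact script I used.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_colours(path, n_clusters=4):
    img = Image.open(path).convert("RGB").resize((100, 100))  # shrink to speed things up
    pixels = np.array(img).reshape(-1, 3)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    # each entry: the (R, G, B) of a cluster centre plus its share of the image
    return [(centre.astype(int).tolist(), count / len(pixels))
            for centre, count in zip(km.cluster_centers_, counts)]

print(dominant_colours("frame_0001.png"))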

Midifying dominant colours

Skipping over a good deal of frustrating cut-and-paste, I finally got a script together that would take each frame of the video and give me a range of the dominant colours or ‘clusters’. Converting those into notes and durations didn’t take too much messing around and, thankfully, there are some very easy-to-use MIDI libraries for Python out there!

I ended up with each image generating a kind of arpeggio from a cluster — each colour represents a note that plays for a duration equal to the ‘amount’ of that colour in the image analysis. I could have made them play at the same time for a chord, but I knew that would sound odd and the rise and fall of the notes seemed to suit the idea of motion more.
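To give a flavour of that mapping, here’s a sketch using the midiutil library (one of those easy-to-use options, though not necessarily the one in my script); the colour-to-pitch conversion is a deliberately crude illustration.

# Sketch: turn one image's colour clusters into a short arpeggio.
# midiutil is one easy option; the colour-to-pitch mapping is illustrative only.
from midiutil import MIDIFile

def clusters_to_midi(clusters, filename="arpeggio.mid", beats_per_image=4):
    midi = MIDIFile(1)
    midi.addTempo(track=0, time=0, tempo=120)
    time = 0
    for (r, g, b), share in clusters:
        pitch = int((r + g + b) / 3) // 2              # squash 0-255 down to a 0-127 MIDI pitch
        duration = max(share * beats_per_image, 0.25)  # 'amount' of colour sets the note length
        midi.addNote(track=0, channel=0, pitch=pitch, time=time,
                     duration=duration, volume=100)
        time += duration                               # one note after another, not a chord
    with open(filename, "wb") as f:
        midi.writeFile(f)

# e.g. clusters_to_midi(dominant_colours("frame_0001.png"))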

Here’s the first test output from the script — a random image of my daughter messing with the camera, analysed for four clusters. The resulting MIDI file was run through GarageBand with a random instrument (chosen by my daughter) and looped over a few bars. It grows on you! (note: the SoundCloud embed is a bit flaky on Chrome)

Applying the same analysis to my cinemaredux images was just an exercise in time — more images take more time to analyse. But eventually I got a MIDI file and this is the result. (note: the SoundCloud embed is a bit flaky on Chrome)

Like my thumbnail experiment, I’m happy with the result because, well, it works. At some point I may do a more technical post* explaining what I did. For now though, if you want to see the code and see if you can get it to work, then head over to GitHub.

Some further work

It would be nice if the code was neater and faster, but it works. Where it falls down is in timing. The duration of the MIDI file is much longer than the actual journey. That means some experimenting with the ratio of notes, tempo and number of images. But I’m happy with the result so far. I’ve also got a few more ideas to try:

  • It would be nice to have a version that was more ‘tuneful’ in the traditional sense. In tutorials I’ve read, like Michael Corey’s earthquake piece mentioned earlier, it’s common to tune the data by mapping the values to a key, e.g. moving all the notes so they are in the C major scale (there’s a rough sketch of that just after this list). That way, I guess, I could risk generating a chord for each image without it sounding like I’m constantly crashing my bike.
  • It would also be nice to break colours up across musical tracks. Low value RGB colours like black and grey could be used to play bass notes and higher value colours on another track to play melody.
  • By using MIDI I’m not limited to playing ‘instruments’. I could, for example, use samples of the environment I cycle through and then ‘trigger’ them using the notes, e.g. red plays middle C which triggers the sound of a car. It’s also possible to use data to filter sounds. So I could use the sound from the head camera itself and use the data to apply filters and other effects over its duration.
  • Finally, it would be nice to create a cinemaredux style image just of the colours selected, like a colour based piano roll or musical score.
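On that first idea, snapping notes into a key is mostly a lookup table. Here’s a rough sketch of quantising a MIDI pitch to C major (my own illustration; it isn’t in the commute script yet).

# Sketch: snap an arbitrary MIDI pitch to the nearest note in C major.
# Illustration only; this isn't in the commute script (yet).
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of C major within an octave

def snap_to_c_major(pitch):
    octave, offset = divmod(pitch, 12)
    nearest = min(C_MAJOR, key=lambda note: abs(note - offset))
    return octave * 12 + nearest

print(snap_to_c_major(61))  # C#4 (61) snaps down to C4 (60)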

Conclusions

You might be reading this and thinking why? You may listen to the ‘music’ and really think ‘MY GOD MAN! WHY?’ But the process of thinking about how data points can be ‘transformed’ was fun and I’m now a lot more confident using Python to structure and manage data.

There are a lot of assumptions and work-arounds in this script. The process of making the content more musical alone means a level of engagement with music theory (and MIDI) that I’m not really up for right now. The more I dive into some of the areas I’ve skated over in the script, the more I become aware that there’s also similar work out there. But my approach was to see how quickly I could get a half-baked idea into a half-made product.

For now I’d be interested in what you think.

*Essentially, when looking for scripts to average out the colour of an image, I came across a method called k-means clustering for colour segmentation. That’s what is used to generate the stacked chart of colours. That gave me the idea for the arpeggio approach.

The GIANT CAP approach to coding for journalists.

Coding for journalists is a contentious topic. Despite what anyone tells you, if you’ve never touched a programming language, it’s not straightforward. I’m not a coder/programmer/hacker at all; I like to tinker. But like many, I sit down with the best intentions — I’m going to learn x or y properly. I sit down with the tutorials and work my way through the first few until I get bored or can’t get stuff to work like the tutorial and give up.

The best results I’ve had are when I’ve had a problem. Something I know code can do but no idea how. Knowing what it is I want is enough to get me started. I can’t honestly say I’ve ‘learned to code’ but I’ve learned enough to get the job done.

As a result I’ve settled into a methodology for working with code that works for me and I think also might work if you’re a journalist and want to try coding. It’s called GIANT CAP.

In a nutshell it’s this:

Google It And Then Cut And Paste

Breaking down the method in more detail, there are six main things to consider.

  • Pseudo code your ideas
  • Ask Google
  • Try examples
  • Cut and paste errors into Google
  • Attribute the solutions by saving links to your sources.
  • Comment the backside off your code.

Pseudo code your ideas.

Pseudo code is an approach to describing what you want to achieve in a semi-structured way. Let’s say I have a million rows of spending data from government and I want to identify which rows correspond to the NHS and work out an average spend. Here’s a very basic example of some pseudo code:

1. Load the data file from a folder
2. Tell me how many rows there are
3. Look for rows that match the value NHS
4. Tell me which rows they are. 
5. Work out the average value from those rows
6. Save those to a new spreadsheet.

Breaking down the job this way helps identify the blocks of code you’ll need. Many programming languages are designed with an eye on mirroring ‘normal language’ so in principle it shouldn’t be a huge leap to begin to make it look more like code as we progress. At this point though, try and look at one task per line but try not to be too specific. As your experience grows you can even start to throw in more code-like ideas.
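Jumping ahead a little, to show where that spending pseudo code could eventually end up, here’s a hedged pandas sketch; the file name and the ‘supplier’ and ‘amount’ column names are invented for illustration.

# Sketch of the spending-data pseudo code in pandas.
# "spending.csv", "supplier" and "amount" are placeholder names.
import pandas as pd

df = pd.read_csv("spending.csv")                                    # 1. load the data file
print(f"{len(df)} rows loaded")                                     # 2. how many rows there are
nhs = df[df["supplier"].str.contains("NHS", case=False, na=False)]  # 3. rows matching NHS
print(nhs.index.tolist())                                           # 4. which rows they are
print(f"Average spend: {nhs['amount'].mean()}")                     # 5. average value of those rows
nhs.to_csv("nhs_spending.csv", index=False)                         # 6. save to a new spreadsheet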

Here’s an example of that from something I kludged together the other day. I wanted to work out how many rows I needed to show 60 images per row based on the number of images in a folder.**

# Count how many files are in a folder. num_files
# Work out how many minutes that is. minutes = num_files/60
# Round that number up so we get full minutes

At the end of each line I’ve added a variable like num_files or a bit of ‘maths’ to work something out like minutes = num_files/60. That last equation worked directly in Python — it was actual code!
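For what it’s worth, that little block of pseudo code fleshes out into something like this (the folder name is a placeholder):

# The pseudo code above, fleshed out. "frames" is a placeholder folder name.
import math
import os

num_files = len(os.listdir("frames"))   # count how many files are in the folder
minutes = num_files / 60                # work out how many minutes that is
rows = math.ceil(minutes)               # round up so we get full minutes (rows)
print(num_files, rows)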

Ask Google

Now you have your basic framework, you can use the lines of your pseudo code to ask Google to help. So we could take our first line and Google “How do I load a file from a folder in Python”. I’ve added the ‘in Python’ bit to differentiate the programming language. You could equally try ‘in Javascript’ for example.

This is the bit where you need to take a deep breath and dive in. Things can get quite hard-tech quickly. But if it looks daunting, my advice is ‘don’t step away too quickly’. Scan through pages and tutorials, even if you don’t understand them completely. You’ll be surprised how quickly you can join the dots just by immersing yourself in the language.

One site that pops up a lot and I guarantee will become a regular haunt, is Stack Overflow. It’s the best and worst site on the web for coding advice. There are loads of examples and lots of advice. But again, be prepared to grit your teeth and wade through some stuff that might not immediately make sense. The key thing is there is lots of code to cut and paste.

You’ll also come across sites that seem to cover loads of what you need — those sites that people lovingly curate over time. One I’ve found really handy is data scientist Chris Albon’s site. It has loads of great tutorials, covering everything from Python basics to more data-journalism-friendly things like pandas.

Try examples

Once you’ve found some examples and code that makes sense, try it! Yes, this method assumes you already have some coding environment set up. But hey, this is a method not a tutorial! For the record I’ve been enjoying playing with Python and I found Anaconda a really great system. It installs Python and other things you’ll hear coding journalists talk about, like R and Jupyter.

If you don’t have a coding environment or just want to tinker, there are plenty of places you can try code out. If you’re playing with JavaScript then ‘sandboxes’ like Codepen are great fun. If you want to try Python or R (very common in data journalism) then try Jupyter.

Trying code out in blocks is not only a good way to learn by doing, it’s also a good way to build a library of code to use. Taking our example above, having a working block of code that takes a file from a folder or filters some data, is something we can use again and again.

Cut and paste errors into Google

Cutting and pasting code is guaranteed to throw up errors. Most commonly:

  • You’re missing something the code needs to work — code often relies on third party code that comes as a package or library (see the example after this list)
  • You’ve put the code in the wrong place — often there’s an order to the way things are done.
  • It’s the wrong version of the language — code differs between versions of the same language, e.g. look at this comparison of Python 2 and Python 3.
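The first of those usually announces itself as a missing-module error, and the fix is normally to install the package. A quick illustration, using scikit-learn as a stand-in example (note the install name doesn’t always match the import name):

# A typical 'missing package' error:
#   ModuleNotFoundError: No module named 'sklearn'
# The usual fix is to install the package from the command line, e.g.
#   pip install scikit-learn
import sklearn  # works once the package is installed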

If there’s one thing that programming languages do well it’s give you errors. They are also really bad at telling you what they mean. But most will have some things in common:

  • They’ll tell you which line in your code caused the error (or at least where it started to go wrong)
  • They will tell you the general type or effect of the error.

Here’s an example from Python:

File "testcode.py", line 112, in <module>
    print ("We've got "+len(npath)+" frames to work with ")
TypeError: must be str, not int

It tells us that at line 112 something that should be str is an int! Copying the error and putting it into Google gives us plenty to go at. Again, it is helpful to stick the name of the language at the end too, e.g. “TypeError: must be str, not int python”. The problem here is I’m trying to do something with a number (int) when Python was expecting text (str).
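In this case the fix is just to turn the number into text with str() before gluing the strings together, so the offending line becomes:

print("We've got " + str(len(npath)) + " frames to work with")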

Attribute the solutions by saving links.

If you find a solution to a problem, save the link. Preferably by adding it to your code as a comment. That way you’re not going to forget where the advice came from and if you need a reminder it’s easier to find it. It’s also important when it comes to blocks of code that you may cut and paste. You should always cite your sources — sometimes you’re required to by a licence.

Comment the backside off your code.

It’s likely that you’re not going to be programming every day. So something that reminds you what you did and why is important. I’ll also comment on development as it happens so I know what I tried to get it to work, e.g. leave a comment to describe what’s happening at a particular point in the code. This often makes code more cluttered than more experienced coders would like but it means I’m not looking across notebooks and other documents. All the process is in the code.

There’s a debate in coding circles about comments and the general feeling is that you should keep comments to a minimum — the code should be descriptive enough. I can see the sense in that, especially if you’re working with others. Comments might mean something to you but might confuse others. But in GIANTCAP you’re commenting for you so be as cryptic or wordy as you like.

Good enough to move to the next step

The GIANT CAP method won’t help you be a better coder. Some might argue the cut-and-paste bit might make matters worse through bloated code etc. But I do think it will get you on the way — it has for me.

The code I write is never pretty. It isn’t efficient. It is often slow and very rarely works first time. I don’t think I’d show some of it to anyone let alone a coder. But it works. Eventually.

I would guess that for most journalists, that’s enough. Code just needs to do a job, deliver a result we recognize and can work with. Then we can move on. Maybe a few days or months later you’ll need to do it again, or something similar, and it will all be there waiting to remind you. All the info about the way you kludged your way through last time will make the next time a bit easier and maybe remind you how much you enjoyed it when it finally worked.

If you want to see an example of some code written using the GIANT CAP method take a look at some code I wrote to sample a video file.

** You’ll notice it starts with the hash symbol; that’s because I wrote the code in Python and # is how you start a comment.

Grounding journalism education

This is a version of a Keynote speech I gave at the Journalism Education and Research Association of Australia conference #JERAA17 in December 2017

I’ve just recently changed jobs. It’s a move across. I’ve gone from being a Senior Lecturer in journalism at the University of Central Lancashire to being a Senior Lecturer in journalism at Manchester Metropolitan University. In some ways, nothing much has changed. But starting in a new university, well, it feels a little bit like visiting Australia from the UK; we speak the same language but each place is, in its own ways, very different. So it goes with universities. Each has its own unique bureaucracy!

But apart from that dissonance of being in a new environment, one of the things that gave me pause for thought was the application process… well it would. But I realised that it was the first time in a long time (I was at my last post for nearly 20 years) I was essentially asked to seriously reflect on “why do you want this job?”.

Now, if I’d been looking at a big pile of papers to grade… “why do you do this?” might have become a more existential question… But, after nearly 20 years in journalism education, it was a reality check — “why do you want to carry on doing this?” and, on the basis that, given my current pension pot, I’ve possibly got another 18 years doing it, “what are you signing up for?”

For those of you here who are recently from industry or still with a foot in the industry, I guess those are questions you may have considered too.

So I wanted to reflect a little on that today and think about the kind of direction those next 18 years might take.

I started lecturing online journalism in 1999. UCLan had started the first MA in online journalism in the UK (pretty much the first in the world, I think). My background is in media production, and amongst other things my career followed the increasing use of computers in production; through the use of sequencers and samplers in music production, non-linear video editing and on to the emergence of the web — that experience and the fact that I’m an unrepentant geek is what landed me at UCLan teaching HTML to journalists. Weeks of it! That’s how you got websites up and running in the days before content management systems; I sent students on work placement to mainstream media websites and they were using FTP/Dreamweaver to publish.

The course was pretty much self-contained. There was little or no cross-over with the other postgraduate courses. The broadcasters broadcast, the print people printed and on-liners, well, we on-lined… We all toiled in our corners, and so it was in industry. I’m sure many of you will remember that time. Somewhere in the corner of the newsroom, or even on another floor, where the geeks lived… the ones doing, well, who knows what.

But over the next five years or so, what had been seen exclusively as an output medium on the edge of journalism, albeit with its own unique properties, began to creep into the mainstream. People began to pay attention not just to when and how their stories appeared online but also where those stories were coming from. Digital was starting to become as much a part of the input process of reporting as it was the output of the journalistic process.

Within industry the reaction wasn’t always positive. In newsrooms there were many tales of the divide between analogue and digital news — tales of audible gasps if a person crossed the room from print to the online ‘side’. Maybe it was around that time that the issues of sustainability — money — kicked in with a vengeance… It comes to something when you can cite the first dot com bubble as influencing recruitment.

It’s a problem that hasn’t gone away. But it’s perhaps telling of the state of things now that one of the most high profile philanthropic funders in journalism right now, especially in projects around Trustworthy Journalism, is Craig Newmark. The same Craig Newmark who, I’m pretty sure, if he’d walked into some of the executive meetings in newsrooms I was in around 2006, might not have been so welcome. His site Craigslist was the shorthand for the evil of digital disruption and diminishing economic returns.

Now digital, we are told, is the ‘new normal’. In the print newsrooms that were the first to grapple with digital, there has been a flip. The desk in the corner, or more likely in another building (or state), is the print desk. And it was the print newsrooms that really grappled with this. Broadcasters, perhaps because of the prevalence of state funding, especially in the UK and Australia, seemed curiously absent from the early debates.

But digital has caught up with the expectations of quality and functionality broadcast journalists demand. The promised digital disruption of video that drove many local non-broadcast newsrooms to invest in video in the mid-noughties is finally here, thanks in no small part to the ubiquity of mobile. That initial reticence to engage with digital in broadcast means we are now revisiting debates from that time. Podcasts, for example, are now (at least the second time around of trying) seen as viable content both editorially and economically. Although you’d be forgiven for thinking there were no podcasts before Serial!

Balancing the equation of what industry wants, how we define journalism through our actions and what the consumer does makes journalism training a moving target

What I find interesting here is that this explosion in multimedia is perhaps more fully formed and familiar to consumers than many in broadcast might feel comfortable with — a broadcast version of the platform-dissonance print journalists have experienced. If you want to start a fight in a newsroom just hold your phone up in portrait not landscape — 90 degrees of professional separation. It’s a pithy but reliable example of a pervasive problem that describes the broad challenges we face in both industry and education — common practice, best practice and industry practice don’t always match.

Teaching the new normal

It’s inherent in the nature of journalism courses that they are vocational — they prepare people to work in journalism by reflecting industry practice. So it’s perhaps inevitable that the reality of how capable journalism has proved to be in responding to change and the new digital reality is something we wrestle with more and more in academia. I look at student journalists graduating this year and I know the industry priorities — what they demand from new hires — have radically changed in the three or four years they have studied.

I remember being at an academic conference in the mid noughties and being asked, in the face of all this new stuff, what goes? Implicit in the question was that there was so much new stuff to cover, what of traditional journalism goes to make room.

The new normal, it turns out, is a difficult target to hit

I’ve been lucky enough to have spent a good deal of the last few years working with newsrooms making the digital transition. I’ve been with over 400 journalists of various levels of seniority — junior reporters to group editors — in a room looking at one aspect or another of the shift to digital, and one of the questions I always asked in one form or another was “what would make your job easier?” Answers varied. In the early days it was always a variation on the theme of ‘can we turn off the internet’. But, I don’t want to make too light here. Commonly the two top answers were: time and resources.

What ‘resources’ means varies. Sometimes it’s as simple as a new mobile phone. But more often than not it means people. More people with the right skills to do what needed to be done…by association the people we as journalism educators provide.

But it was time that was the deal breaker. More time to do all the things that needed to be done and be able to develop expertise with the new ideas and tools that pop up, finding ways to make them work.

A shared brain for industry

As a journalism educator how do I read that? For me more broadly it speaks of an industry that recognises and reacts to challenges but doesn’t have the time or resources to learn from them. It’s an industry telling me that it doesn’t have the time to work out what to do with the people it knows it needs but doesn’t know what they will do!

Time is something that seems to be in short supply for everyone. We all function in an attention economy — but time is literally money to media organisations.

In comparison, time is something that we are relatively rich in in academia — (I’ve made a note to myself to duck behind the podium at this point….)

I know. If I asked you that ‘what would make your job easier’ question, you’d no doubt give me the same answer — time and resources. Who wouldn’t like a new mobile phone?

But experimenting with new tools and thinking about what journalists might do with them and the types of journalism they might produce is something that has always been part of what I do — it has to be when the landscape changes so fast. It’s rare that any of us can simply roll out the same lecture as we did last year.

Early on in my career I got into the habit of blogging my experiments and my thinking. You had to really. Those of us teaching online and digital journalism were few and far between so it was the only way to call on the collective community brain before social media came along. More than that though, it’s opened the door to working on new ideas and collaboration and addresses that resources question. It’s helped ground me in the huge range of contexts where journalism is done.

– As a side note: blogging is an amazing tool for sharing and organising thoughts. We don’t do it enough. Experiment with Medium or WordPress. In a world of tweets and updates saying what people are thinking and doing, it’s a wonderful and essential way to say ‘why’ we think and do things.

One of my most popular posts this year is on how to create socially shareable video — with captions etc. — using PowerPoint. Why? Because I needed to teach a class on socially shareable video and there wasn’t time or consistent access to resources to teach them Adobe Premiere or After Effects. I get loads of feedback on that from journalists who use it as freelancers or in small newsrooms where time and IT resources are limited.

The David Moyes Excuse generator was a perfect example of the power of Knowledge exchange.

Here’s another example. If you’re a soccer fan, some of you in the room might remember David Moyes. He was the manager of Man UTD for a little while. He didn’t do well. The running joke was that he always had an excuse for a poor performance. I was working with a group of journalists in a two-day session called ‘the art of the possible’. Basically, two days of permission to experiment — on a side note, how forward thinking was that of the media org! The journalists had an idea for a little widget that would automatically generate an excuse based on your problem. So, they thought about the excuses and I went away and did some hacking around with some JavaScript to make — the David Moyes excuse generator.

Methodological interlude — we are journalism academics after all — ‘should journalists code?’ is a common question, so here’s my code development methodology…

G.I.An.T.C.A.P

Google It And Then Cut And Paste

I digress. By the end of the two days, it was up and running and on their site and it caused a bit of a stir — a bit of a viral hit.

This basic code did the rounds of the newsgroup for a good long while, re-skinned as various things until eventually there was enough of a use case that a version of the functionality was made available in their content management system.

Now, the viral nature of this aside — and man do I wish I’d had a pound for every hit that thing generated — what I loved about this process (as with the PowerPoint example) was that it showed open innovation can work. It made an impact not just as a piece of content but also on the way the organisation worked. In academic terms, that’s tangible knowledge exchange; academia and industry working together, sharing knowledge in an open and informative way for mutual benefit.

In a world full of the known unknowns of that time vs resources equation, renting time with our collective brain is something that the industry badly needs. We can be the pause before industry engages with the idea. If that means they become a little less reactive and more responsive to the digital churn, that benefits everyone. That target moves a little less.

But the impact of that collaboration is just as important for us as academics in a world where Knowledge exchange is not just an aspiration but a KPI.

Research and Knowledge exchange are the new normal

The balance of research and impact/KE we generate is increasingly being measured and assessed. If it doesn’t already, it will very likely define our contracts and workloads in the future. KE is sometimes a hard sell in arts and humanities but Journalism is such a unique blend of think and do, done in such close proximity to the industry, that it seems like an open goal right now.

That doesn’t mean ‘traditional’ research isn’t important. We aren’t just doers, we are critical thinkers and doers. It’s interesting that in Australia, the relationship between research and practice in J-schools seems closer than in the UK. There’s also often not a lot of love for the “ivory tower” in the industry — the idea that we don’t know what it’s like on the ground is a frustratingly common throwback to a traditional view of traditional research.

But you know as well as I do that everything you’re talking about in this conference over the next few days is what the industry is talking about. Journalism research is very much live and relevant.

Research can be a painful process for people coming into education. It is for me. It’s frustrating for those used to the speed of journalism. But if we can make clearer paths between research and knowledge exchange through things like PhDs by practice, more collaboration and pressure to recognise non-traditional research outputs, then we are beginning to move beyond a perception of research as some process of generating esoteric ephemera no one sees.

That’s important to industry too. We sit in a really useful place to be a critical friend to journalism and if we can do that in an open and accountable way, through research and communicating what we do, that better places us to be honest brokers for journalism in broader policy discussions. We can turn that passion we have for the profession into advocacy with impact.

I don’t think there has ever been a time when that is more relevant and vital than now. As much as I hate the term, and I really do hate the term (it’s one of the most poisonous, critically empty phrases in use at the moment), fake news has proved common cause for journalism and academia. Unlike broad contexts like digital, which simply feels like a debate on disruption, the new world order of Trump and the increasingly partisan media landscape feels like an existential threat we can get behind philosophically and professionally. It goes beyond genres and practice right to the heart of what we think journalism is for, doesn’t it?

And in that context, I guess this is when I put my critical friend hat on.

The gaps in representation

It almost goes without saying that journalism is in a very difficult place right now. The bite seems harder and more vicious than ever before. Restructuring, layoffs, newsroom closures. Perhaps it feels all the more vicious right now, when we know good journalism remains vitally important. Now more than ever, we need to double down on living up to the ideological link between journalism and democracy — core ideas of keeping people reasonably and fairly informed about what is going on around them and holding those who seek to get in the way of that accountable.

But we know that the bites are leaving the biggest holes away from the world stage of TweetStorms and Trump. Journalism is not happening at a local level as it should.

Closures and consolidation in local and regional media have left gaps. People talk about the democratic deficit caused by a shrinking local media. Some go as far as to talk about news deserts. But these are not new problems to wrestle with.

In the past, the response to these issues has been a patchy mix of newsroom-driven collaboration and bottom-up community-driven responses. The former often struggles by inheriting the systemic problems of sustainability from its parent. The latter is often rendered invisible to the mainstream thanks to a deep-seated institutional lack of diversity. But there’s movement in the right direction.

It’s interesting, for example, to see demands in Australia to offer tax exemptions for community media. In the UK, the Welsh government announced it has budgeted nearly 200k over 2 years to support the development of community and local news services. In 2012 the House of Lords even suggested that investigative journalism should be eligible for ‘charitable status’. Accountability, especially at a local level has reached the level of soft state intervention.

In the UK, as part of the licence fee settlement, the BBC has set up a local democracy reporters scheme — paying to put reporters into regional newsrooms to cover what UK journos would call ‘court and council’ — civic reporting. The material they create is shared in a common hub which other media organisations, including community and hyperlocal media, can get access to.

That project has not been met with universal acclaim. Many in journalism seem pre-programmed to resist intervention in journalism in any form, including from other journalism organisations. But it does show that outside of the Punch and Judy of populist politics and industry debate, there is a broad recognition and concern for the sustainability of ‘accountability journalism’.

But perhaps the most promising but challenging response to the issue is a rise in third sector organisations entering the space. Non-profits doing accountability journalism and in one form or another, giving their content away.

As a model it isn’t new. It gave us ProPublica. But more recently, driven by investment from organisations like Google, Facebook’s Newsroom project or Craig Newmark’s foundation, there is a growing, influential and relatively cash-rich ‘3rd and 4th sector’ of accountability journalism spinning up. What’s positive is that we are also seeing a growing presence of universities and academics in the mix. There’s the News Integrity Initiative in the US for example.

I know the issue of the ‘duopoly’ of Facebook and Google is a common windmill for us to tip at here. But whatever motivation you ascribe to the funders, the money and support is there and that’s shifted the focus back to the viability of model for philanthropic funding journalism.

I know it’s a model that’s of interest to you here in Oz. The Public Interest Journalism Foundation for example is asking questions of sustainability and, like others, looking to philanthropy and recognition of non-profit media organisations I mentioned earlier.

New models for the local journalism army

What’s good to see is that this is starting to filter down to a local level.

In the UK for example, Google have funded the not-for-profit media organisation The Bureau of Investigative Journalism to set up The Bureau Local, which uses a community model to build up investigations, often data driven, into stories with national significance but built for local use and impact — they share the content for anyone to use.

Effectively uncoupled from the economic model of traditional journalism, locally focussed accountability journalism organisations take on a bridging role. They see themselves actively stepping into the gaps left behind by journalism but retaining a close proximity to the identity of journalism — it’s about connecting community with journalism.

Fourteen years ago two students of mine started a hyperlocal blog called Blog Preston. That now runs as a CIC (a form of company registration under UK law designed for social enterprises) with an aim to strengthen Preston as a community. That blog is now arguably as visible as, if not more so than, the local paper — that visibility allows it to experiment with innovative ideas like a print edition which pushed over 10,000 copies into the city (the local daily newspaper in Preston has a certified circulation of 9,874). It’s made them visible and vocal advocates for their city and the community has responded in support.

We are also seeing experiments with new models of ownership and accountability within the organisations — cooperatives like community newspaper The West Highland Free Press. It serves a geographical area of over 250 square miles with a readership of 8,000 covering the islands of Skye, Lewis and Harris. The newspaper has been worker-owned since 2009 and also has a flourishing website. There is also The Bristol Cable which operates as a co-op both financially and editorially.

As part of their commitment to community and as part of their business model, these organisations also offer training and support for journalists looking to learn new skills — especially data journalism — but there’s a reading of who is a journalist here that might sit uncomfortably with some traditionalists. If citizen journalism gave you existential shivers then you’re in for a rocky ride. Many of these organisations are also vocal in their criticism of traditional journalism — they are there because traditional journalism has failed.

Now I’m not sure I would agree with that. But for whatever reason — there are whole conferences in that — there are gaps and they are being filled by people who have an affinity with journalism but aren’t the mainstream. They are in but not of journalism.

It’s a really positive development. But issues of sustainability still loom. There aren’t enough of these organisations and there certainly aren’t enough at local level.

It’s time for universities to step up to the local gap

So to end with, here’s a little thought experiment and a chance for me to be a bit provocative.

The best journalism education is hands-on. We create working journalists by having them work as journalists.

Industry demands that of us and it’s our commitment to the student — “we’ll give you the skills you need”.

I think it’s right that the stories our students tell in learning those skills are about real people, real events. I don’t want simulations or classroom exercises to feed the gallery of newsroom hacks who question the experience our students have.

We go to great lengths to create learning experiences and even media platforms in the service of that process — course websites, papers and magazines, broadcast output.

But let me ask you a question. Who is your competition?

Is there anyone else publishing news where your uni is based? What about the local newspaper? Is there one? How do you rate it? How does the community rate it?

Chances are they’re well respected. All of the studies I’ve read show the level of trust and confidence in local media is still high. People value their local media outlets.

But let’s flip that question a bit. How confident are you, in the current media climate, that it will be able to keep going?

Let’s put some numbers in play here.

UCLan has a student cohort in journalism of over 150 students. At MMU it’s nearer 200. Being conservative, that’s 20–30 ‘reporters’ who at various times of the year will be out in the local community looking for stories to tell. Be it basic reporting or more in-depth investigations.

So given the resources, time and the relative level of financial security universities have, what’s stopping journalism courses filling that accountability gap?

  • Why not start a Blog Preston or a Bureau Local from within the university?
  • Why not go out and build a co-op like the Bristol Cable?
  • Why not buy the local newspaper or radio station?

There is an opportunity right now in journalism education, even if it’s just a thought experiment, for us to flip the model of how we work.

At the moment our feet are firmly planted in industry and in academia. But in the current media climate, we are at risk of simply delivering students with the prescribed skills and critical underpinning into an industry that, through attrition, will take them further away from those communities where they learnt their trade. We need to think about how we can plant our feet firmly in the community around us — ground ourselves there and reach out to the industry. We can’t hope that journalism finally sorts itself out and reaches back.

In shifting that perspective, we don’t lose anything. We can still service the need to provide students trained with the skills industry needs, and we can do the experimentation and thinking that industry can’t do. But we can also do what journalism is supposed to do, a role the industry is increasingly struggling to service, and that’s to make sure that our communities are represented.

So I worry about the next 18 years. In part because yes, some things will stay the same. Yes, students won’t turn up to lectures sometimes; university bureaucracy won’t go away. That can get boring. But where I think it really matters, things are really going to change. They already have. The industries of journalism and academia I found myself in over 20 years ago have changed radically, often despite their best efforts. We need to think about how we respond to that.

The what and the how are just going to be the moving target they always were. What is more important is that we hold on to why we do what we do, and it’s vital that we think more deeply about who benefits. Because people really do benefit.

So as much as I worry, I don’t really see myself doing anything else.

It’s a lot of work. But why wouldn’t you want to do this stuff…

As journalism academics we not only have the chance to influence the influencers. We create the influencers. We are the influencers! (I’ve got a note to myself here to not try and laugh like a power-crazed maniac at this point)

How powerful and empowering is that?

Two fundamentals that define good data journalism

Defining data journalism is a hostage to fortune, but as I start teaching a data journalism module I’ve boiled it down to two things: visible methodology and data.

I’m teaching a module on Data Journalism to second year undergraduates this year. It’s not the first time we’ve done that at the university. A few years ago three colleagues of mine, Francois Nel, Megan Knight and Mark Porter ran a data journalism module which worked in partnership with the local paper. I’ve also been tormenting the students with elements of data journalism and computational journalism across all four years of our journalism courses.

There are a couple of things I wanted to do specifically with this data journalism module (over and above the required aims and outcomes). The first thing was, right from the start, to frame data journalism as very much a ‘live conversation’. It’s exciting, and rare these days, that students can dive into an area of journalism and not feel they are treading on the toes of an existing conversation. The second thing was to try and get them thinking about the ideological underpinnings of data journalism.

Data journalism as a discourse borrows most heavily and liberally from the vocational underpinnings of journalism — the demand of journalism to serve the public and hold to account that John Snow and others have talked about. But it also draws on the rigour of science, the discipline of code, design thinking, narrative and social change; anything to bring shape, structure and identity. This is often a good thing, especially for journalism, where new ideas are few and far between and it takes a lot to challenge the orthodoxy. Perhaps that’s why data journalism is seen as an indicator for prosperous media companies. But it’s also a bad thing when it’s done uncritically. I’ve written lots about how I think data journalism borrows the concept of open for its own purposes for example. Often much of the value of data journalism seems implied.

The fluid nature of data journalism discussion makes it difficult to identify “schools” of data journalism thought — I don’t think there’s a Bloomsbury Group of data journalism yet!* — but there are attempts to codify it. Perhaps the most recent (and best) is Paul Bradshaw’s look at 10 principles for data journalism in its second decade. It’s a set of principles I can get behind 100% and it’s a great starting point for the ideological discussion I want the students to have.

That said, and pondering this as I put together teaching materials, I think things could be a little simpler — especially as we begin to identify and analyse good data journalism. So if there was a digitaldickinson school of data journalism I think there would be a simple defining idea…

If you can’t see, understand and, ideally, interact with either of those two things (the method and the data) in the piece, it may be good journalism but it’s not good data journalism.

When good journalism becomes good data journalism

Here are two examples to make the point.

The Guardian published a piece that uses Home Office data to reveal that asylum seekers are being housed by some of the poorest councils in the UK. A story that rightly caught the eye of government and campaigners alike. Exceptional journalism. Poor data journalism.

An exceptional piece of investigation, great journalism but this would score low as a piece of data journalism

The problem with the piece is that, although it relies heavily on the data used, it is light on the method and even lighter on the underpinning data. The data it uses is all public (there is no FOI mentioned here) and there isn’t even a link to the source, let alone the source data.

Contrast that with a piece from the BBC looking at the dominance of male acts at festivals. 

The BBC’s piece might be seen as frivolous, but no less a piece of journalism.

An introduction to the method ticks the boxes for me.

It’s a fascinating piece but the key bit for me is at the end where there is a link to find out how the story was put together. That’s the thing that makes this great data journalism. The link takes you to a GitHub repository for the story which includes more about the method, unpublished extras and, importantly, the raw data.

The BBC England Data Unit GitHub page is a good example of how to add value to data journalism stories.

The BBC take is a full-service, all-bases-covered example of good data journalism; it’s the Blu-ray-with-special-features version of the article. To be fair to the Guardian piece, they do talk a little about the ‘how’. But not on the level of the BBC piece. I also recognise that in these days of tight resources, not every newsroom needs to create this level of detail. But using GitHub to store the data, or even just linking to the data direct from the article, is a step in the right direction — it’s often what the journalists would have done anyway as part of the process of putting the article together.

Making a point

I’ve picked the Guardian and BBC stories here as examples of data-driven journalism. These are two stories that put data analysis front and centre in the story. But I recognise that I’m the one calling them ‘data journalism’. I’m making a comparison to prove a point of course, but my ‘method’ aside, the point I think stands — beyond the motivations, aims and underpinning critical reasons, when the audience access the piece without the method and the data, can we really say it’s data journalism?

I want my data journalism students to really think about why we see data journalism as a thing that is worthy of study not just practice. Not in a fussy academic way but in a very live way. It isn’t enough to judge what is produced by the standards of journalism alone (I’m guessing the Guardian piece would tick the ‘proper journalism’ box for many). But it isn’t ‘just journalism’ and it isn’t just a process. If the underlying principles and process aren’t obvious in the content that the readers engage with, then it’s just an internal conversation. It has to be more than that.

For me, right now, outside of the conversation, good data journalism starts with a visible method and data.

*I guess if there was they would vehemently deny there was one.

Is Data Journalism any more open?

Last year I wrote about how the 2016 Data Journalism Awards illustrated that journalism hasn’t quite got to grips with the full meaning of open data. So I thought I’d take a look at this year’s crop and see if things had improved.

This is last year’s definition for the open data category:

Open data award [2016] Using freedom of information and/or other levers to make crucial databases open and accessible for re-use and for creating data-based stories.

This year’s was the same save for an addition at the end (my emphasis):

Open data award [2017] Using freedom of information and/or other levers to make crucial datasets open and accessible for re-use and for creating data-driven journalism projects and stories. Publishing the data behind your project is a plus.

A plus! The Open Data Handbook definition would suggest it’s a bit more than a plus…

Open data is data that can be freely used, re-used and redistributed by anyone — subject only, at most, to the requirement to attribute and sharealike

…if you want people to re-use and re-distribute then people need the data.

Let’s take a look at this year’s shortlisted entries and see how they do with respect to the open data definition.

So, in the order they appear on the shortlist…

Analyzing 8 million data from public speed limit detectors radars, El Confidencial, Spain

This project made use of Spain’s (relatively) new FOI laws to create “an unique PostgreSQL database” of traffic sanctions due to exceeding the speed limits. A lot of work behind the scenes then to analyse the results and a range of fascinating stories off the back of it. It’s a great way to kick the tyres of the legislation and they’ve made good use of it.

Most of the reporting takes the same form. The story is broken down into sections each accompanied by a chart. The charts are a mix of images and interactives. The interactive charts are delivered using a number of platforms including Quartz’s Atlas tool but the majority use DataWrapper. That means that the data behind the chart is usually available for download. Most of the heavy lifting for users to search for their area is done using Tableau Public which means that the data is also available for download. The interactive maps, made on Carto, are less open as there is no way to get at the data behind the story.

Verdict: Open(ish) — this makes good use of open government legislation to create the data, but is that really open data? The data in the stories is there for people to download but only for the visualisations. That’s not the whole data set. There also isn’t an indication of what you can do with the data. Is it free for you to use?

Database of Assets of Serbian Politicians, Crime and Corruption Reporting Network — KRIK, Serbia (this site won the award)

For their entry independent investigative journalism site KRIK created “the most comprehensive online database of assets of Serbian politicians, which currently consists of property cards of all ministers of Serbian government and all Serbian presidential candidates running in 2017 Elections.” Reading the submission it’s a substantial and impressive bit of work, pulling in sources as diverse as Lexis and the Facebook Graph. They even got in a certified real estate agency “which calculated the market values of every flat, house or piece of land owned by these politicians”. Amazing stuff done in a difficult environment for journalism.

Verdict: Closed — This is a phenomenal act of data journalism and would, in my view, have been a deserving winner in any of the categories. But the data, whilst searchable and accessible and certainly available, isn’t open in the strict sense.

#MineAlert, Oxpeckers Investigative Environmental Journalism, South Africa

Using information access legislation and good old journalistic legwork, Oxpeckers Centre for Investigative Environmental Journalism pulled together a dataset of mine closure information that revealed the impact of a chaotic mining sector in South Africa. The data highlighted the number of derelict mines that hadn’t been officially closed and were now being illegally and dangerously mined. There’s a nice multimedia presentation to the story and the data is presented as an embedded Excel spreadsheet.

The project has been developed and supported by a number of organisations including Code for Africa. It’s no surprise then that the code behind parts of the project is available via GitHub. The data itself is also available through the OpenAfrica data portal where the licence for reuse is clear.

Verdict: Open. The use of github and the OpenAfrica data portal add to the availability of the data which is clearly accessible in the piece too.

Pajhwok Afghan News, Afghanistan

Independent news agency Pajhwok Afghan News have created a data journalism ‘sub-site’ that aims to “use data to measure the causes, impact and solutions driving news on elections, security, health, reconstruction, economic development, social issues and government in Afghanistan.”

The site itself offers a range of stories and a mix of tools. Infogr.am plays a big part in the example offered in the submission. But other stories make use of Carto and Tableau Public. The story “Afghan women have more say in money that they earned themselves than property in marriage” uses Tableau a lot and that means the data is easy to download, including the maps. That’s handy as the report the piece is based on (which is linked) is only available as a PDF.

Verdict: Open(ish) — the use of Infogr.am as the main driver for visualisation does limit the availability of the data, but the use of Tableau and Carto does lower the barriers a little.

ProPublica Data Store, ProPublica, United States

The not-for-profit investigative journalism giant ProPublica have submitted a whole site: a portal for the data behind the stories they create. Interestingly, ProPublica also see this project as a “potential way to defray the costs of our data work by serving a market for commercial licenses.” That means that as a journalist you could pay $200 or more to access some of the data.

Verdict: Open. Purists might argue that the paywall isn’t open and ideally it would be nice to see more of the data available and then the service and analysis stuff on top, rather than the whole datasets being tied up. That said, it’s not like ProPublica are not doing good work with the money.

Researchers bet on mass medication to wipe out malaria in L Victoria Region, Nation Media Group, Kenya

This piece published by The Business Daily looks at plans to enact a malaria eradication plan in Lake Victoria region. The piece takes data from the 2015 Kenya Malaria Indicator Survey amongst other places to assess the impact of plans to try and eradicate the disease.

Verdict: Closed. The work done to get the data out of the reports (lots of PDFs) and visualise it is great and it’s a massively important topic. But the data isn’t really available beyond the visualisations.

What’s open?

Like last year it’s a patchy affair when it comes to surfacing data. Only two of the entries make their data open in a way that sits comfortably in the definition of open data. For the majority, the focus here is on using open government mechanisms to generate data and that’s not open data.

As noted last year, what open data journalism should be is really about where you put the pipe:

  • open| data journalism — data journalism done in an open way.
  • open data | journalism — journalism done with open data.

By either definition, this year’s crop better represent open data use, but fall short of the ‘open’ ethos that sits at the heart of open data.

Does it matter?

I asked the same question last year: in the end, does the fact that the data isn’t available make the journalism bad? Of course not. The winner, KRIKS, is an outstanding piece of journalism and there’s loads to learn from the process and thinking behind all the projects. But I do think that the quality of the journalism could be reinforced by making the data available. After all, isn’t that the modern reading of data journalism? Doesn’t making our working out and raw data more visible build trust as well as meaning?

Ironically perhaps, ProPublica highlights the problem in the submission for their data store project —

“Across the industry, the data we create as an input into our journalism has always been of great value, but after publication it typically remained locked up on the hard drives of our data journalists — of no use either to other journalists, or to anybody else who might find value in it.”

Publishing the data behind your project is what makes it open.

If you think I’m being picky, I’d point out that I’m not picking these at random. This is the shortlist for the open data category. These are what the judges (and the applicants) say are representative of open data. I think they could go further.

As I’ve noted before, if the practice of data journalism is to deliver on transparency and openness, then it needs to be part of that process. It needs to be open too. For me, I’d like to see “Publishing the data behind your project is a plus” changed to an essential criterion for next year.

Local votes for hyperlocal #DDJ

There’s a good deal of interest in my feeds in a BBC report Local voting figures shed new light on EU referendum. The work has been a bit of a labour of Hercules by all accounts.  

Since the referendum the BBC has been trying to get the most detailed, localised voting data we could from each of the counting areas. This was a major data collection exercise carried out by my colleague George Greenwood.

This was made more difficult by a number of issues including the fact that: “Electoral returning officers are not covered by the Freedom of Information Act, so releasing the information was up to the discretion of councils.”

But the data is in and the analysis is both thorough and interesting. I particularly like the fact that the data they collected is available as a spreadsheet at the end of the article. There are gaps, and there have been some issues with this (but it’s already being put to good use). More and more I’m seeing data stories appear with no link to the data used or created as a result of the reporting.

Getting local.

In a nice bit of serendipity, Twitter threw up a link to a story on the Reading (Katesgrove Hill) based hyperlocal The Whitley Pump. The story, ‘Is east Reading’s MP voting for his constituency?’, starts with the MP for Reading East, Rob Wilson, questioning an accusation that he voted against his constituents in the recent Article 50 vote. His response was, in effect, prove it: “Could you provide the evidence on how my constituency voted? My understanding is that no such breakdown is available.” That’s just what Adam Harrington of The Whitley Pump set out to do.

The result is a nice bit of data journalism that draws on a number of sources, including council data, and concludes: “There is nothing to support a view that Reading East voted to leave the EU, and available data makes this position implausible.”

If nothing else, it’s a great example of how hyperlocal data journalism can work. Unlike the BBC, the Pump didn’t need to deliver across the whole country, but it did follow a lot of the same methods and fell foul of many of the same issues, not least the lack of data in the first place.

Encouraging data practice at hyperlocal level. 

The BBC’s recent announcement on the next steps for its local democracy reporters scheme includes mention of a local Data Journalism Hub. In a blog post officially announcing the scheme, Matthew Barraclough noted:

We hope to get the Shared Data Hub in action very soon. Based in Birmingham, BBC staff will work alongside seconded journalists from industry to produce data-driven content specifically for the local news sector.

It would be great to see that opportunity to work and learn alongside the BBC include hyperlocals like the Whitley Pump.

Image courtesy of The European Parliament on Flickr.

Why open data needs to be “Citizen literate”

A “data literate” citizen isn’t someone who knows how to handle a spreadsheet — it’s someone who inherently understands the value of data in decision making.

So says Adi Eyal in a piece very much worth a read, called Why publishing more open data isn’t enough to empower citizens, over on IJNet.

I’m right behind the sentiment expressed in the headline.

I’m fascinated by the tensions caused by the use of open data – or perhaps more specifically the rhetoric of its use.  I often find myself questioning the claims of the ‘usefulness’ of open data, especially when they are linked to social and community outcomes. I share Eyal’s view that  whilst there may be some big claims, “there is not yet a larger body of work describing how open data has brought about systemic, long-term change to societies around the world.”

Some might argue (me included) that it’s just too early to make judgements. As idealistic and iconoclastic as the promises may be at times, I do think it is just a matter of time before we begin to see tangible and consistently replicable social benefit from the use of open data.

But the key challenge is not the destination or how long it takes to get there. It’s how we do it.

In the IJNet piece Eyal makes a distinction between simply freeing the data and its effective use, especially by average citizens. He makes a strong case for the role of “infomediaries”:

These groups (data wranglers, academics, data-proficient civil society organizations, etc.) turn data into actionable information, which can then be used to lobby for tangible change.

I’m very drawn to that idea and it reflects the way the open data ecosystem is developing and needs to develop. But I do think there’s an underlying conflation in the article that hides a fundamental problem with the assumption that infomediaries are effective bridges: it assumes that open data and open government data are the same thing.

It’s an important distinction for me. The kinds of activities and infomediaries the article highlights are driven, for the most part, by a fundamental connection to open government (and its data). There is a strong underpinning focus on civic innovation in this reading of the use and value of open government data. I’d argue that open data more broadly is driven by a strong underpinning of economic innovation, from which social and civic innovation might be seen as value created from the use of the services they provide.

There is a gap between those who hold the data and use it to make decisions and those who are affected by those decisions. I don’t think that open data infomediaries always make that gap smaller; they simply take up some of the space. Some do reach across the gap more effectively than others, good data journalism for example. But others, through an economically driven service model, simply create another access point for data.

From an open data ecosystem point of view this is great, especially if you take a market view. It makes for a vibrant open data economy and a sustainable sector. From the point of view of the citizen, the end user, the gap is still there. They are either left waiting for other infomediaries to bring that data and its value closer, or required to skill up enough to set out across the gap themselves.

The space between citizens and government is often more of a market economy than a citizen-driven supply chain.

There is a lot of the article that I agree with, but I’d support the points made with a parallel view and suggest that, as well as the data literate citizens Eyal describes, open data infomediaries need to be “citizen literate”:

A citizen literate data infomediary isn’t one that just knows how to use data; it’s one that understands how citizens can effectively use data to be part of a decision-making process.


The BBC, Local democracy, hyperlocal and journalism.

I spent the afternoon in Birmingham at the BBC finding out more about their Local Democracy Reporters scheme.  It’s a project I’ve been keeping an eye on for a number of reasons.

The promise of 150 new jobs in journalism, especially ones that are exclusively aimed at covering local government, is clearly of interest to me as a journalism lecturer. It’s more opportunities for students and journalists, for one thing. But the focus on civic reporting also begins to address an area that I think is under-resourced and under-valued (by producers and consumers alike). The scheme also includes plans for a content hub, called the News Bank, for material created by the reporters, which anyone can apply to use. This would also include content from the BBC’s fast developing Regional Data Journalism unit.

The combination of data, hyperlocal and civic content is too good for me to ignore.

What’s in it for hyperlocals?

One of the underpinning reasons for this scheme is to “share the load” of accountability journalism. The role of journalism in holding the powerful to account is one that many feel is being lost, especially at a local government level. People talk about a democratic deficit and news deserts: towns with no journalistic representation at all. Many see hyperlocals as an essential part of filling the gap, but it’s notoriously hard to create a sustainable hyperlocal business model. So it is no surprise that hyperlocal and community media representatives have been following the development of the project with interest. When the BBC promise a pot of money to improve local democratic reporting, who better to benefit from the cash!

So how would the scheme work?

The fine detail of the plan is still being pulled together, but in principle the scheme would be something like this:

The BBC will create contracts for Local Democracy Reporters, but they won’t manage the reporters. Rather than 150 separate contracts, they have packaged them up into ‘bundles’ containing a number of reporters per geographic patch. Local news organisations can then bid to take on these contracts on behalf of the BBC. The organisation will be responsible for the reporter both editorially and from a straight HR point of view (sick leave, appraisals etc.). The BBC have a number of criteria and requirements for anyone wanting to bid. These include a proven track record in producing good quality content and the capacity to properly employ and manage a member of staff.

The content created by the reporters, as well as any prospects, will be made available on a shared News Bank. So as well as the ‘host’ organisation, other media organisations can use the content created. There would be no exclusives for host organisations; when the content drops, it drops for everyone with access to the content hub. So you don’t need to employ a local democracy reporter to get access to the content on the News Bank, but you would need to apply to the BBC for access. As long as you fulfil their criteria – adherence to basic editorial standards and a track record in producing good quality content – you’re in!

There is a good deal of simplification here on my part. There is a tonne more detail in the plans that were presented today but we were asked not to share too much. Which is fine by me.

But at the event today, I made a few broad notes on some issues and observations.


  • Defining ‘bundles’ – A number of the hyperlocal operators in the room noted that the bundles suggested by the BBC sometimes didn’t make sense when you knew the local geography and political landscape. Others noted that they seemed to mirror the regional media orgs’ patches. The BBC noted that the geography of the scheme was, in some part, driven by the location of BBC local offices, which would have a role in overseeing the project. That said, the BBC were very open to feedback on the best way to divide up the patches. A positive role for hyperlocals, and it shows the value that the focus on a patch can bring.
  • Scale and partnerships – Many of the hyperlocals in the room felt that the decision to package up reporters by patch, and the criteria set for qualifying organisations, effectively shut them out of the process. They might be able to manage one reporter but not three or four across a large patch. One solution offered was working in partnership with larger, regional media organisations to deliver contracts in an area, e.g. an established media player such as Trinity Mirror or Johnston Press could take on the contract and then work in partnership with a hyperlocal to deliver the content, whilst the larger org takes on the HR and management issues. I think the devil is in the detail, but it strikes me as a good compromise. It’s fair to say, though, that the idea wasn’t warmly received by many of the hyperlocals in the room. I think the best way to describe the reason is ‘because trust issues’. Interestingly, the idea of collaboration between hyperlocals to create networks to bid got very little comment or, it seems, interest.
  • Value to the tax payer – The BBC are clearly caught between a rock and a hard place with initiatives like this. They have money that they want to use to ‘share the load’ but, at the same time, would be under huge amounts of scrutiny for what is produced and who they work with. Accountability is something they take very seriously, and the BBC are masters at getting themselves in knots trying to be fair and balanced to everyone. Often they just can’t win. The scheme as presented today highlighted some of those tensions. By ‘outsourcing’ the management of the journalists they deal with the issue of the BBC barging into a sector and skewing the market. But at the same time, the need for accountability means the scheme is run through with the ‘checks and balances’ the Beeb would apply to ensure licence fee payers were getting value for money. It’s not quite as hands-off as it could be. It also seems that the ‘value for money’ test stretches to ensuring that the material collected by the reporters is also useful to the BBC and their own reporting. Not quite having your cake and eating it, but maybe confusing who you are baking the cake for.

But in the midst of the accountability knots and the predictable cynicism and animosity that underpin the relationship between some hyperlocals and the regional media, I think something really important slipped by that’s worth keeping an eye on.

The BBC seal of approval

To get access to the News Bank, organisations will need to submit an application to the BBC. General noises around the criteria suggest these will include caveats on quality content and a track record in producing news content. Orgs will also need to show a commitment to the same editorial guidelines for balance and impartiality as the BBC. But details of the assessment process were sketchy.

But let’s look at that another way. In short, the BBC will become a local media accreditation body.

I don’t know how I feel about that. To be clear, I certainly don’t perceive any suspicious motives. But it still makes me uneasy.

I guess you could read it in the same way as hyperlocals being recognised as publishers by Google so they could feature in Google News. Perhaps, as long as the process was transparent, it’s not a bad thing that some standards are defined. But then, I think the sector doesn’t really have a problem in that area.

I don’t know.  But of all the issues this scheme raises, it feels like the one most likely to generate unintended consequences.

All of that said, it’s worth watching and supporting. Looking beyond the implementation, which is never going to tick all the boxes, I do think the scheme, when it rolls out, will mark one of, if not the, biggest investments in civic journalism in the UK that isn’t technology driven. I might go as far as to say it’s the only journalism-first investment in civic innovation that I’ve seen in the UK.

It may not work across the board but you’ve got to admire the idea.


Making Instagram video with Powerpoint

Audio slideshows are something I’ve included in my practical teaching for a little while. The combination of images and well recorded audio is, for me, a compelling form of content and it can be an easy video win for non-broadcast shops.

When I work with the students and journalists exploring the concept, I try and look for free or cheap solutions to the production process. In the past I’ve used everything from Windows Movie Maker to Youtube’s simple editor app to put packages together. But this year when I was putting the workshops together, I wanted to focus on social platforms and go native video on Instagram.

Video on Instagram

It’s not the first time I’ve looked at Instagram video. A few years ago, having seen a presentation about the BBC’s Instafax project (in 2014!), I had a look at cheap and free tools to use to create video for Instagram. But things have moved on — like the BBC’s use of Instagram.

So I started to look at how I might use a combination of accessible tools, with a view to doing an update on that post. I found myself thinking about Powerpoint.

Why Powerpoint!

When I talk to students about video graphics, I often point them to presentation apps like Google Slides and Powerpoint as simple ways to create graphic files for their video packages. They have loads of fonts, shapes and editing tools in a format they are familiar with (more of them have made a Powerpoint presentation than have worked with a video titling tool!). The standard widescreen templates are pretty much solid for most video editing packages, and you can export single slides as images. So I took a quick look at Powerpoint to remind myself of the editing tools. Whilst I was playing around with the export tools, I discovered that it had an export to video. So I opened up Powerpoint to see how far I could go, and about an hour of playing around later I had the video below.

I worked through the process on a Windows version of Powerpoint, but the basic steps are pretty much the same for a Mac. If you’re on a Mac then Keynote is also a good alternative which will do all of the stuff you can do with Powerpoint, with the added bonus that it will also handle video.

Here’s what I did. (You can download the Powerpoint file and have a look; I’m making it available as CCZero.)

You can see a video walk-through of parts of the process or scroll down for more details.

The process

  • Open Powerpoint and start with a basic template
  • Click the Design tab and then select Slide Size > Custom Slide Size (Page Setup on Mac)
  • Set the width and height to an equal size to give us the square aspect ratio of Instagram. Click OK. Don’t worry about the scaling warning

You can set a custom slide size for Powerpoint which means we can create custom slides that fit with Instagram and other platforms.
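If you ever want to script that setup rather than click through the menus, the python-pptx library can build a square template programmatically. This is just a rough sketch of an optional alternative; the 7.5 inch size and the output file name are my own illustrative choices, not part of the walkthrough:

from pptx import Presentation
from pptx.util import Inches

# Start from the default blank presentation
prs = Presentation()

# Make the canvas square to match Instagram's 1:1 aspect ratio
prs.slide_width = Inches(7.5)
prs.slide_height = Inches(7.5)

# Add a single blank slide (layout 6 is 'Blank' in the default template)
prs.slides.add_slide(prs.slide_layouts[6])

prs.save("instagram_template.pptx")

You’d still add the text and animations and do the video export in Powerpoint itself; this just saves setting the slide size up each time.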

You can now play around with the editing tools to place text, images and other elements on each slide.

Animating elements

The tools to add shapes and text are pretty straightforward, but one effect that seems popular is ‘typewriter’ style text, where the words animate onscreen. Luckily that’s built into Powerpoint.

  1. Add a Text box and enter the text. Make sure you have the text box selected not the text
  2. Go to the Animations tab, select the text box and click on Appear.
  3. Open the Animations Pane in the tool bar
  4. In the Animations pane, right-click on the animation for the text box (it will be named with any text you’ve added) and select Effect Options
  5. In the Animate text option, select By word. You can speed the text up using the delay setting. (Note: you can’t do this in the Mac version.)

The typewriter effect is a common one on many social videos, and one that Powerpoint makes short work of.

For the rest, it’s worth experimenting with basic transitions and animations before you try anything too complex. Once you start to get separate elements moving around you’ll need to think about text as separate elements — you’ll end up with ‘layers’ of text; but that’s no different from a video editor.

Adding Audio

You can add audio to individual slides or to play as an audio ‘bed’ across all the slides.

A common feature of Audio Slideshows on Instagram (and other social platforms) is that the text drives the story; the audio is often music or location sound that adds a feel for the story. In this example I used sound that I recorded on the scene but you could use any audio e.g. a music track.

You can also adjust the timing of slides to match the audio or just to give you control over the way slides transition and display.

Transitions and timing give you control over how long content appears and how it appears

Exporting your video

Once you’re happy with your presentation you can create a video version:

  • Click the File tab
  • Select Export > Create a Video

You have a few choices here. The quality setting allows you to scale the video. Presentation quality exports at 1080×1080; Internet quality 720×720 and Low Quality at 480×480. I went for Internet Quality as it kept the file size down without compromising the quality too much.

You can also set the video to use the timings you set up in each slide or to automatically assign a set time to each slide. Which one you pick will depend on the type of video you want to make.

Exporting to video is one of the default options in Powerpoint. PC and Mac will save to MP4.

Getting video on Instagram

Instagram has no browser interface for uploading, so once the video is exported you’ll need to transfer the final file to your mobile device. I managed fine just emailing files around, but you might want to look at alternatives like WeTransfer or Google Drive as a way of moving files from desktop to mobile device.

Beyond Instagram

It’s worth noting, even belatedly, that your video doesn’t have to be square. Instagram is happy with standard resolutions of video. You could use a standard 16×9 template and Instagram will be fine. I just wanted to be a bit more ‘native video’. But there is nothing stopping you setting up templates for Twitter video (W10cm X H5.6cm Landscape video) or Snapchat (W8.4 cm X H15cm — Portrait video).

Conclusions

There are limitations to using Powerpoint;

  • You need Powerpoint — It’s an obvious one, but I recognise that not everyone has access to Office. That said, it can also be the only thing people do have! It’s a trade-off.
  • It’s not happy with video — If I embed a video into the presentation, Powerpoint won’t export it as part of the video. According to the help file there are codec issues. I haven’t experimented with Windows native video formats, which may help, but it seems like a bit of a mess. It’s a shame. It will take an MP4 from an iPhone and play it well. It will spit out an MP4. But it won’t mix the two! Those of you on a Mac, this is the point to move to Keynote, which is quite happy to include video.
  • Effects can get complicated — once you get beyond a few layers of text, the process of animation can get tricky. In reality it’s no more or less tricky than layering titles in Premiere Pro. The Animation Pane also makes this a little easier by giving you a timeline of sorts.
  • Audio can be a faff — The trick with anything other than background sound is timing. Knowing how long each slide needs to be to track with the audio can add another layer of planning that the timeline interface of an editing package makes more intuitive.
  • It’s all about timing — without a timeline, making sure your video runs to length is a pain. With the limitations of some platforms that could mean some trial and error to get the correct runtime.

But problems aside, once you’ve set up a presentation that works, I could see it easily being used as a template on which to build others. The slideshows are also pretty transferable, as the media is packaged up in the ppt file.

It’s not an ‘ideal’ solution but it was fun seeing just where you could take the package as an alternative platform for social video.

Don’t forget, you can download the PPT file I used and have a dig around (CCZero). Let me know if you find it useful.

Mapping Drone near misses in Google Earth*

My colleague Andrew Heaton from the Civic Drone Centre set me off on a little adventure with mapping tools when he showed me a spreadsheet of airprox reports involving drones.

In my head an airprox report describes what is often called a ‘near miss’ but more accurately, the UK Airprox board describe it as this…

An Airprox is a situation in which, in the opinion of a pilot or air traffic services personnel, the distance between aircraft as well as their relative positions and speed have been such that the safety of the aircraft involved may have been compromised.

The board produce very detailed reports (all in PDF!) on all events reported to them, not just drones, and they pack that all up in a very detailed spreadsheet each year. You can also get a sheet that has all reports from 200–2016! (h/t Owen Boswarva). If you look at those sheets and you just want drone reports look for ‘UAV’. There is also a very detailed interactive map of UK Airprox locations you can look through.

But given I’m on a bit of a spreadsheet/maps thing at the moment, I thought it would be fun to see if I could get the data from the spreadsheet into Google Earth. Why? Well, why not. But I did think it would be cool to be able to fly through the flight data!

Getting started.

The Airprox spreadsheet

At first glance the data from the Airprox board looks good. The first thing to do is tidy it up a bit. The bottom twenty or so rows are reports that have yet to go to the ‘board’, so the details on location are missing. I’ve just deleted them. Each log also has latitude and longitude data, which means mapping should be easy with things like Google Maps. But a look over it shows the default lat and long units are not in the format I’d expected.

This sheet uses a kind of shorthand for Northings and Eastings. These are co-ordinates based on distance from the equator — the N you can see in the Latitude — and distance to the west and east of the Greenwich Meridian line — the W and the E you can see in the Longitude. To get it to work with stuff like Google Maps and other off-the-shelf tools, it would be more useful to have it in decimal co-ordinates, e.g. 51.323 and -2.134.

Converting the lat and long

This turned out to be not that straightforward. Although there are plenty of resources around to convert co-ordinate systems, the particular notation used here tripped me up a little. A bit of digging around, including a very helpful spreadsheet and guide from the Ordnance Survey, and some trial and error sorted me out with a formula I could use in a spreadsheet.

Decimal coordinates = (((secs/60)+mins)/60)+degrees

If the longitude is W (or the latitude S) then multiply by -1, e.g. ((((secs/60)+mins)/60)+degrees)*-1. So to convert 5113N 00200W to decimal:

Latitude =((((00/60)+13)/60)+51) = 51.21666667
Longitude =((((00/60)+00)/60)+2)*-1 = -2

Running that formula through the spreadsheet gave me a set of co-ordinates in decimal form. To test it I ran them through Google Maps.
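If you’d rather not build the formula up in a spreadsheet, the same conversion is easy to script. Here’s a rough Python sketch of the logic (my own helper function, nothing to do with the Airprox data or any library), assuming values like 5113N and 00200W where the seconds digits are optional:

def airprox_to_decimal(token):
    """Convert shorthand like '5113N' or '00200W' to decimal degrees.

    Assumes degrees (2 digits for latitude, 3 for longitude), then minutes,
    then optional seconds, followed by a hemisphere letter.
    """
    hemisphere = token[-1].upper()
    digits = token[:-1]
    deg_len = 2 if hemisphere in ("N", "S") else 3
    degrees = int(digits[:deg_len])
    minutes = int(digits[deg_len:deg_len + 2])
    seconds = int(digits[deg_len + 2:deg_len + 4] or 0)
    value = degrees + (minutes + seconds / 60) / 60
    # South and West become negative in decimal co-ordinates
    return -value if hemisphere in ("S", "W") else value

print(airprox_to_decimal("5113N"))   # 51.2166...
print(airprox_to_decimal("00200W"))  # -2.0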

Getting off the ground.

Google Maps is great but it’s a bit flat. Literally. The Airprox data also contains altitude information, and that seems like an important part of the data to reflect in any visualisation of things that fly! That’s why Google Earth sprang to mind.

To get data to display in Google Earth you need to create KML files. At their most basic these are pretty simple. You can add a point to a map with a simple text editor and a basic few lines like the one below. Just save it with a KML extension e.g. map.kml

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> 
 <name>Here is the treasure</name> 
 <Point>
  <coordinates>
    -0.1246, 51.5007
  </coordinates>
 </Point>
</Placemark>
</Document> 
</kml>

Any KML file usually opens in Google Earth by default, and when it opens it should settle on something a bit like the shot below.

Google Earth jumps to the point defined in the KML file.

Adding some altitude to the point is pretty straightforward. The height, measured in metres, is added as a third co-ordinate. You also need to set the altitudeMode of the point, which “specifies a distance above the ground level, sea level, or sea floor”.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> 
 <name>Here is the treasure</name> 
 <Point>
  <coordinates>
    -0.1246, 51.5007, 96 
  </coordinates>
   <altitudeMode>relativeToGround</altitudeMode>
 </Point>
</Placemark>
</Document> 
</kml>

The result looks something like this.

Setting the altitudeMode and setting an altitude co-ordinate gives your point a lift.

But hold your horses! There’s a problem.

The Altitude column in the Airprox sheet is not in metres. It’s in feet.

When it comes to distances, aviation guidance mixes its units. Take this advice from the Civil Aviation Authority’s DroneCode as an example:

Make sure you can see your drone at all times and don’t fly higher than 400 feet

Always keep your drone away from aircraft, helicopters, airports and airfields

Use your common sense and fly safely; you could be prosecuted if you don’t.

Drones fitted with cameras must not be flown:

within 50 metres of people, vehicles, buildings or structures, over congested areas or large gatherings such as concerts and sports events

On the ground it’s metres but height is in feet! So the altitude data in our sheet will need converting. Luckily Google Sheets comes to the rescue with a simple formula:

=CONVERT(A1,"ft","m")

A1 = altitude in feet

Once we’ve sorted that out, we can look at creating a more complete KML file from a spreadsheet with more rows.

Creating a KML file from the spreadsheet

The process of creating a KML file from the Airprox data was threatening to become a mammoth session of cut-and-paste, typing in co-ordinates into a text editor. So anything that can automate the process would be great.

As a quick fix I got the spreadsheet to write the important bits of code using the =concatenate formula.

=CONCATENATE("<Placemark> <name>",A1,"</name><Point> <coordinates>", B1,",",C1,",",D1,"</coordinates> <altitudeMode>absolute</altitudeMode> </Point> </Placemark>")

Where 
A1 = the text you want to appear as the marker
B1 = the longitude
C1 = the latitude
D1 = the altitude

The spreadsheet can do most of the coding for you using the =concatenate formula to build up the string (click the image to see the spreadsheet)

To finish the KML file, select all the cells with the KML code in them and paste them into a text file between the standard text that makes up the KML header and footer.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>

paste the code from the cells here.

</Document> 
</kml>

Your file will look something like the code below. There’ll be a lot more of it and don’t worry about the formatting.

<?xml version="1.0" encoding="UTF-8"?> 
<kml xmlns="http://earth.google.com/kml/2.0"> 
<Document>
<Placemark> <name>Drone</name><Point> <coordinates>-2,51.2166667,91.44</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Drone</name><Point> <coordinates>-2.0166667,51.2333333,91.44</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Unknown</name><Point> <coordinates>-2.6833333,51.55,2133.6</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark><Placemark> <name>Model Aircraft</name><Point> <coordinates>0.25,52.2,259.08</coordinates> <altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark>
</Document> 
</kml>

The result of the file above looks something like this.

With a simple file you can add lots of points with quite a bit of detail.
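As I admit in the conclusions below, a script would do this more cleanly than =concatenate. For the curious, here’s a rough Python sketch of the same idea. It assumes the tidied sheet has been exported as a CSV called airprox.csv with name, longitude, latitude and altitude_ft columns (my own, made-up column names), and it folds in the feet-to-metres conversion too:

import csv

KML_HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://earth.google.com/kml/2.0">\n<Document>\n')
KML_FOOTER = '</Document>\n</kml>\n'

PLACEMARK = ('<Placemark> <name>{name}</name><Point> '
             '<coordinates>{lon},{lat},{alt}</coordinates> '
             '<altitudeMode>relativeToGround</altitudeMode> </Point> </Placemark>\n')

with open('airprox.csv', newline='') as source, open('airprox.kml', 'w') as kml:
    kml.write(KML_HEADER)
    for row in csv.DictReader(source):
        # The Airprox altitude is in feet; KML expects metres
        metres = round(float(row['altitude_ft']) * 0.3048, 1)
        kml.write(PLACEMARK.format(name=row['name'], lon=row['longitude'],
                                   lat=row['latitude'], alt=metres))
    kml.write(KML_FOOTER)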

Is it floating?

When we zoom in to a point it can be hard to tell whether the marker is off the ground or not, especially if we have no reference point like Big Ben! Luckily you can get the KML file to draw a line between the ground and the point to make it clearer. You need to set the <extrude> option by adding it to the point data:

<Placemark> <name>Unknown</name><Point> <coordinates>-2.6833333,51.55,2133.6</coordinates> <altitudeMode>relativeToGround</altitudeMode> <extrude>1</extrude></Point> </Placemark>

The result looks a little like this:

Wrapping up, some conclusions (and an admission)

There is more that we can do here to get our KML file really working for us: getting more data onto the map, maybe a different icon. But for now we have pretty solid mapping of the points and a good framework from which to explore how we can tweak the file (and maybe the spreadsheet formula) to get more complex mapping.

Working it out raised some immediate points to ponder:

  • It was an interesting exercise but it started to push the limits of a spreadsheet. Ideally the conversion to KML (and some of the data work) would be better done with a script. But I’m trying to be a bit strict and keep any examples I try as simple as possible for people to have a go.
  • The data from the Airprox board is, erm, problematic. The data is good but it needs a clean, and some standard units wouldn’t go amiss. It could also do with some clear licensing or terms of use on the site. I could be breaking all kinds of rules just writing this up.
  • The data doesn’t tell a story yet. There needs to be more data added, and it needs to be seen in context, i.e. its relationship to flight paths and other information.

And now the admission. I found a pretty immediate solution to this exercise in the shape of a website called Earth Point. It has a load of tools that make this whole process easier, including an option to batch convert the odd lat/long notation. It also has a tool that will convert a spreadsheet into a KML file (with loads of options). The snag is that there is a subscription cost for doing batches of stuff. However, Bill Clark at Earth Point does offer free accounts for education and humanitarian use, which is very nice of him.

So I used the Earth Point tools to do a little more tweaking, with some pleasing (to me) results.

You can download the KML file and have a look yourself. Let me know what you think and if you have a go.

Thanks to Andrew Heaton for advice and helpful navigation round the quirks of all things drones and aviation. If you have any interest in that area I can really recommend him and the work the CDC do.

*Yes, I’m pretty sure ‘near misses’ isn’t the right word but forgive me a little link bait.