What we’ve learnt so far at National World

The regional news publisher has been experimenting with AI across a number of fronts. Tim Robinson, editorial director and publisher, looks at what progress they’ve made so far.

By Tim Robinson

Q: What have been your key learnings?

A: Like every publisher, we are keen to understand how any new technology can improve our ability to maximise our audiences and their experience of our content.

The AI revolution is the latest — and potentially to date the greatest — opportunity to improve our working practices, unleash greater creativity, and ultimately increase margins so we continue to survive and thrive.

But it’s still a gold rush era.

If you can remember the dot-com bubble in the late ’90s, you’ll be familiar with the vibe: a headlong rush to adopt technological tools without properly considering the final returns or the myriad other implications along the way.

Just like then, being first to market isn’t necessarily the long-term prize.

You don’t have to be a tortoise in this race against the tech-savvy hares, but stepping back to consider just what you’re trying to achieve, how to benchmark your aspirations and how to properly calculate the final ROI may deliver the greatest value in the long term.

Some of the ways forward may be major projects, some just small steps along the way — which I’ll describe here.

Q: In which use-cases have you had the best results?

A: Our digital teams are looking at a variety of tools to improve audience reach and engagement.

But we also still have a significant print portfolio which brings in hard cash and has a loyal readership willing to pay every day.

As the publisher of our city daily newspapers, I was keen they didn’t get left behind and wanted to see if we could use simple AI-driven tools and methods to make our papers better.

We are making strides with a very detailed programme of automated page design, but that’s a story for another day.

In the short term, we have tried to adopt tools which simply enhance the presentation of print content.

You’ll have seen online publishers using tools like ChatGPT to auto-summarise articles and, by doing so, increase page depth (the number of subsequent articles clicked on).

Well, in print, we have been using Google Gemini to summarise in-depth reads down from 1,000 words to 40.

Our authors simply paste in the full article and ask Gemini to pick out the salient points.
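For the technically minded, that paste-and-summarise step can also be scripted against the Gemini API rather than done by hand in a chat window. Below is a minimal sketch in Python using Google’s google-generativeai SDK; the model name, the 40-word limit and the file name are illustrative assumptions, not a description of our production workflow.

    # Minimal sketch of the paste-and-summarise step via the Gemini API.
    # Assumes `pip install google-generativeai` and a GEMINI_API_KEY
    # environment variable; model name and word limit are illustrative.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    def summarise(article_text: str, max_words: int = 40) -> str:
        """Ask Gemini to pick out the salient points of a long read."""
        prompt = (
            f"Summarise the following article in no more than {max_words} "
            "words, keeping only the salient points:\n\n" + article_text
        )
        response = model.generate_content(prompt)
        return response.text.strip()

    # Example: condense a 1,000-word big read to a 40-word nutshell.
    if __name__ == "__main__":
        with open("big_read.txt") as f:
            print(summarise(f.read()))

The point is the prompt, not the plumbing: a plain-English instruction to pick out the salient points does the work.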

Every day, we have a big read on pages 6-7 across all city editions — but if you don’t have the time or the inclination to read it all, don’t turn the page: it’s chopped down for you.

It may sound corny, but we call this summary “In a nutshell” and it’s in the middle of each spread, literally contained in a graphic of a walnut shell.

If you want to know readers’ big talking points in the city today, please take the time to enjoy our letters page — but you could just read the 50-word summary on page 2, again produced by Google Gemini.

More startlingly, we are using AI-created imagery to replace pedestrian stock imagery.

We’ve been running graphics pages summarising key social issues of interest to our core print audiences for a couple of years.

They feature subjects like how to heat your home better, foods to increase well-being, the cost of convenience shopping, that kind of thing, with a barrow-load of stats visualised by our designers.

These now benefit from photo-realistic bespoke illustrations created by Adobe Firefly (other programmes are available).

If you want a picture of a lady holding a fermented milk drink, walking her dog and carrying a selection of specific vegetables, you’d be hard pressed to find it in a stock library — but a bit of clever prompting in Firefly and ... seconds later ... here are four different options in high-def.

A man looking at his phone being bombarded on all sides by gambling ads?

An elderly gent wearing a suit made out of hot water bottles? No problem.

And trust me, they are visually very, very engaging.

No reader has ever complained.

The only adverse reaction has come from a few fellow members of the journalism community when I posted some of our graphics pages on social media and they told me I was “taking work from photographers”.

I’m not, and they are completely missing the point.

Three best-practice top tips

  1. Don’t be afraid to have a go. Many AI tools in this area are driven by basic language prompts. No doubt there will be university courses in prompting soon, but it’s not like writing code or a dark art; it’s more a say-what-you-see approach (like Catchphrase, perhaps?).
  2. Be completely honest. We attribute every image and every piece of AI-generated copy to the programme which created it, just as if it were a human author. Readers will respect the openness if that’s something they’re worried about.
  3. Don’t forget what you’re trying to achieve. Experimentation is good, but we’re not just playing. We want to improve the experience for readers in a cost-efficient, creative way so that they engage for longer, enjoy it more and want to come back for more tomorrow.

Tim and the other contributors to our AI Special will take part in an ‘AI Special – Q&A’ webinar on Tuesday, 28 January. Click here for more information and to register.


This article was included in the AI Special, published by InPublishing in December 2024. Click here to see the other articles in this special feature.