What and who are bylines for? I ask that as an editor who learnt his craft at a consumer magazine that didn’t believe in bylines, adding them only when it was absolutely necessary, as on first person pieces or for celebrity authors.
Before social media, influencers and deepfakes, this was not uncommon. Bylines were at worst an ego trip for journalists or a favour to their journalist friends; at best a way to connect with the reader and provide a contact for feedback — although most readers have never much cared which hack wrote what.
Things have changed. Bylines have become more important for accountability and authority, in a media landscape where trust has become so much more of an issue. Trust is an issue across all media but recently especially so for video, for several reasons.
We like to think that seeing is believing. This is important for holding power to account. And trust works both ways: how do we know what to trust as real, as well as what to dismiss as fake? A prince with his arm around a trafficked teenager, for example, looks bad; but if the prince suggests the image may not be real, how can we be absolutely sure it is? Authenticity can be hard to prove, so merely questioning an image or video can sow enough doubt to make the idea that it's a fake plausible for some (although perhaps not in that particular case).
Images and videos have always had the potential to mislead. And that issue is still with us, as the fallout of resignations and recriminations following the BBC's serious editing mistake, which changed the import of Trump's January 6th speech, shows. In print and online text articles we have the […], an ellipsis in square brackets, to indicate something skipped, but not between partial sentences nearly an hour apart. That would warrant at least a new paragraph, closing one quote and opening another. Video editors cannot simply join two halves of two sentences together and present them as one. It was a major mistake in an otherwise good episode of an excellent news programme from a trusted institution. I reckon publishers reading this will have better processes for handling complaints about editorial mistakes than the BBC demonstrated.
Video is becoming the main news medium for younger generations. And the fact that 70% of Generation Alpha would prefer vertical video to landscape tells you something about where and how they are used to consuming it. The next generation will learn about trust, deepfakes and how to detect them from a new addition to the national curriculum. Adult viewers, who themselves struggle to sift fact from fiction, will soon be trying to explain how to detect deepfakes: a difficult and fast-moving target.
Telltale signs
How on earth will they go about that? CBBC Newsround's site, with the help of BBC Verify, suggests three things to look out for: features, like the right number of fingers; facts, like the weather on the day and at the location; and movement that's robotic or cartoonish. All are giveaways AI is getting better at avoiding as the technology quickly moves on. There may be subtler giveaways detectable by algorithms (perhaps by AI itself) rather than people, but there too we have a fake-versus-truth technology arms race, with one side always trying to stay ahead of the other.
Publishers are fast learning where AI is their friend, their foe or both, from identifying market opportunities through streamlining production to online discovery. AI is revolutionising video. Yes, it can be used maliciously to mislead with sneaky edits, or for wholesale fakery: complete videos of events that are pure fiction, which will only become harder to detect. On the other hand, it can be a useful editing tool, provide increasingly natural-sounding voiceovers, or assemble complete videos from text, stills and moving images.
I’m sure all the publishers reading InPublishing would quite rightly have no problem citing AI’s role in what they produce. Why wouldn’t you? Well, unfortunately, a recent audience survey presents us with a conundrum. The public agree with publishers: AI should be credited where credit is due. But the same survey found they are also less likely to trust content in which AI has had a hand. So you can certainly let your audience know where you’ve employed AI. But then be prepared for them not to trust it once they know that.
That makes source so important. You believe it because of where it comes from and who made it — which brings us back to bylines.
Technologies to detect fakes and test for truth will have their uses but, like anti-virus software or firewalls, they will be in an arms race: not infallible, always playing catch-up and struggling with false positives. Most people will rely on what they think they can trust, most of the time. And that means trusted brands, with the human byline adding essential authenticity.
Credits for AI in content still look like a novelty today but that won’t last long as it spreads to all corners and all stages of publishing. When it’s involved in every article or video in some way, the credit becomes less useful. AI’s multifaceted involvement will become both commonplace and harder to explain with a simple credit. I predict that’s when mainstream media will stop trying to credit AI and instead ensure it credits the human. No byline will mean AI. Bylines will be back for good.
This article was first published in InPublishing magazine. If you would like to be added to the free mailing list to receive the magazine, please register here.
