Dickon Ross’s publishing world 

Robot vs Human

Will columnists be replaced by ChatGPT et al? Ultimately, that will be decided by who writes better columns. Dickon Ross has submitted two versions of his regular column, one written by ChatGPT, one written by himself. Compare and contrast…

By Dickon Ross

To ChatGPT: Write an article of 800 words about whether artificial intelligence could write a compelling opinion article or personal column, in the style of Dickon Ross writing in publishing industry magazine InPublishing.

ChatGPT replies:

Can Artificial Intelligence Pen a Captivating Opinion Article or Personal Column?


In the ever-evolving landscape of technological advancements, artificial intelligence (AI) has made tremendous strides in various fields. From autonomous vehicles to natural language processing, AI has left an indelible mark on society. However, one question that continues to intrigue both readers and writers alike is whether AI possesses the capability to craft a compelling opinion article or personal column. In this article, we delve into the possibilities and limitations of AI-generated content, exploring the potential for machines to express opinions and evoke emotional responses.

The Rise of AI in Content Generation

AI has witnessed significant advancements in natural language processing, machine learning, and deep learning algorithms, enabling it to comprehend and generate human-like text. Various AI models, such as OpenAI's GPT-3, have demonstrated their ability to generate coherent and contextually relevant content based on vast amounts of training data. This progress has spurred curiosity about the potential for AI to master the art of opinion writing and personal columns.

Crafting Compelling Arguments

One essential element of opinion articles and personal columns is the ability to present persuasive arguments. Human writers often draw on their knowledge, experiences, and emotions to construct thought-provoking claims. While AI lacks personal experiences and emotions, it can be programmed to analyze vast amounts of data and draw correlations. By using pre-existing information, an AI model can generate well-researched points and present them in a logical manner. However, it is crucial to note that the absence of genuine emotions and personal anecdotes may limit the depth of connection an AI-generated article can establish with readers.

Capturing Authenticity and Voice

Opinion pieces and personal columns are known for their unique voices and distinct perspectives. Writers infuse their work with personal anecdotes, wit, and a particular tone that captivates readers. Though AI algorithms can mimic human language, they struggle to replicate the nuances and idiosyncrasies that make writing truly authentic. While an AI model can produce coherent sentences and mimic a specific writing style, it often lacks the personal touch and emotional resonance that stem from human experiences. Consequently, AI-generated content may come across as technically proficient but lacking in genuine human connection.

Emotional Engagement

One of the primary aims of opinion articles and personal columns is to evoke emotional responses from readers. Skilled writers have the ability to elicit empathy, inspire passion, or challenge conventional thinking. However, emotions are deeply rooted in human experiences, making it difficult for AI to truly connect on an emotional level. AI algorithms lack the inherent understanding and empathy required to touch readers' hearts. While they can analyze sentiment and employ rhetoric techniques, the emotional impact of AI-generated content may pale in comparison to human-written pieces.

Adapting to Dynamic Scenarios

Opinion writing often involves addressing real-time events, societal trends, or cultural shifts. Successful writers possess the ability to adapt their work to these dynamic scenarios, injecting fresh perspectives and capturing the zeitgeist. AI algorithms, on the other hand, rely on pre-existing data and struggle to incorporate up-to-the-minute information effectively. Although AI can analyze large datasets and provide statistical insights, it may fail to capture the nuances of evolving situations, ultimately hindering its ability to produce timely and relevant opinion pieces.

The Future of AI-Generated Opinion Pieces

While AI-generated content currently falls short in terms of authenticity, emotional engagement, and adaptability, it would be unwise to dismiss its potential altogether. As AI technology continues to advance, it may bridge the gap by incorporating sentiment analysis, personalized writing styles, and real-time data integration. Future AI models may be designed to interact with readers, gather feedback, and refine their content accordingly, thus enhancing their ability to create captivating opinion articles and personal columns.


The question of whether AI can write a compelling opinion article or personal column remains complex. While AI has demonstrated remarkable progress in generating coherent and contextually relevant text, it still struggles to replicate the authentic voice, emotional connection, and adaptability that human writers bring to their work. However, as AI technology evolves, it is not inconceivable that AI-generated content may become more compelling and persuasive. Ultimately, the future of AI-generated opinion pieces lies in striking a delicate balance between technical proficiency and the art of human expression.
Dickon Ross writes: Should the editor replace me with AI? Read our efforts and see what you think. It would be awkward to be made redundant by a technology we’ve been covering in E&T for decades. It’s been a long time coming but this year, the world has woken up to its potential and pitfalls thanks to some freely available online generative AI tools that anyone can try for themselves. We too got our hands on it for the first time.

I first tried this experiment in E&T’s art and engineering special with my Editor’s Letter – the regular monthly welcome and introduction to each issue. It did a pretty good job, getting the typical structure and some familiar phrasing of this type of article, which tends to be quite formulaic. The AI had a good go at following that formula and produced a decent enough introduction to the subject, although it didn’t quite make it in as the introduction to the issue.

Here in InPublishing, I’ve given it a different and, I think, more difficult task, if only because the brief is less clear than introducing a subject and welcoming the reader. Again, it does a pretty good job. It’s comprehensive, it’s clearly structured and it flows well. But, as it acknowledges itself, it has limitations. In fact, it makes some points I would have included here myself; now that it has done so for me, there’s no point repeating them, so I’ve had to adjust what I would otherwise have said.

Nevertheless, ChatGPT has obvious shortcomings as a columnist. It doesn’t have the benefit of personal experience to elucidate points. It struggles with humour, managing at best a weak pun but no amusing anecdotes or witty, ironic observations. It lacks any real depth or intelligent, insightful and meaningful comment. So, all spot-on for this column then! Ta-daaaa!

But that also reveals where AI, at least for now, falls short: interpretation, originality, authenticity, self-expression and true creativity. It reads more like Mr Spock than Dickon Ross. It’s all logic, no emotion – not a bad thing necessarily, just different. Unlike Mr Spock though, these generative AI chat bots don’t always get things right, as underlined by some well-publicised failures. Google’s Bard, in a demo meant to show off its abilities, named the wrong telescope as the source of the first picture of an exoplanet – a mistake called out by real-life astrophysicists, and one that wiped $100bn off Alphabet’s market value. Meanwhile, ChatGPT has been making up sources, and even legal precedents cited in court submissions.

Handle with care

This shouldn’t be surprising. These natural language processing models – just one approach to AI, but the one most people are now familiar with – are strong on language and structure but not so hot on hard facts. They will improve in time, but in the meantime, humans will have to take what they say with a pinch of salt – and learn how to use them better.

Their purpose is quite narrow. They respond well to the right questions, producing a plausible attempt at a piece of writing on a specified subject in a particular style. They’re really clever, but the results are passable, not brilliant. The writing tends to be a pedestrian summary, presenting the various angles on a subject in a way not unlike how an encyclopaedia or Wikipedia might. To that they add certain structures, phrases and style. It looks the part. But that’s about it.

They lack anything more original: the writer’s personal experiences, unique opinions or truly fresh views. They risk reproducing bias that’s already out there, sometimes passing it off as balance. The fairly pedestrian overview of an area they tend to produce has its place, of course, but can miss out on what makes articles compelling. Readers are attracted to articles that are opinionated, fresh, original, startling or even shocking. And for a compulsive read, a column needs to do all these things at once.

For now, I can see they have their uses in journalism. They could be a good tool for generating a starting point for an article, or for checking the various angles and issues in a subject. They may help with organising thoughts or providing a structure. But in time, they will do much more.

As a journalist covering the early, faltering steps of new tech, I’ve come to realise that while the products themselves rarely live up to the hype, the underlying technology eventually goes beyond it. We overestimate gadgets’ importance in the short term and underestimate technology trends in the longer term – although that longer term is getting shorter with every new wave. The legal and regulatory framework for AI is well behind, and if past experience is anything to go by, it won’t catch up.

AI is here to stay. Problems like bias and inaccuracy will be addressed. Deeper problems like originality, creativity or authenticity may take longer.

For that you may have to wait for general AI, which better emulates human-like thinking. These generative AI art and chat bots are great examples of narrow AI – programs that do a specific task well. But artificial general intelligence will apply what it learns to new situations and a broader range of problems, make connections in multiple ways like humans do, and will look more like human intelligence, with the appearance, if not the possession, of consciousness or self-awareness.

This is the stuff of sci-fi disasters, and a famous fictional example is Skynet, the artificial general superintelligence system from the Terminator movies. As we all know, Skynet became self-aware at 2:14am EDT (6:14am GMT!) on the morning of August 29, 1997. All hell broke loose when the machines turned on the humans, and it all ended badly with luminous-eyed metallic robots stomping on human skulls. That’s AI for you. Unpredictable.

But it’s a long way from some dodgy facts and quite cheesy pictures. Next issue: generative AI and visual ‘art’.

This article was first published in InPublishing magazine.