Our recent AI Special Q&A webinar (which you can watch a recording of by registering here) featured the fourteen contributors to our AI Special answering questions from me and from attendees.
There were a few questions that we didn’t manage to get to. The questions are listed below, along with answers provided by the contributors.
The questions:
- What experience or advice do people have with “trans-formatting” — taking an original article, and formatting it for different audiences, different platforms (social, etc), different frameworks of writing? (asked by Walter)
- Some editors have a policy to not accept content created or aided by AI — do you think this is short-sighted? (asked by John)
- Are you concerned about the environmental impact of generating content / images using AI? (asked by Joanne)
- For publishers reluctant to adopt AI technologies due to potential loss of control, erroneous outputs, IP / licensing concerns — how would you allay their fears? (asked by Martin)
- AI has a habit of including unwanted elements in the images it creates. What can publishers do to minimise / avoid this? (asked by Martin)
- Should we be using AI to help us write articles or is that bad for SEO? (asked by ‘s’)
- If you were going through a digital transformation journey, for example building a new publishing website, how essential is it to include AI-enabled search with a personalised LLM tool? (asked by Sian)
- What disclosures or acknowledgements do you run accompanying AI generated content, be it a total illustration or a story first draft? (asked by Michelle)
- What’s next and truly viable for batch automation for a large volume of images? (asked by Mark)
Q: What experience or advice do people have with “trans-formatting” — taking an original article, and formatting it for different audiences, different platforms (social, etc), different frameworks of writing? (asked by Walter)
Jennifer Schivas, CEO, 67 Bricks: Our advice would be to find out from your users how they want to access your content before you start creating anything — otherwise you’ll end up with mountains of new content types but no one actually using them. For our work on the AI-generated podcast for Bone & Joint, we had a real user need to access the latest developments in an easily digestible audio format. Their community is time-poor but has a critical requirement to stay on top of the latest research, so building a tool to generate podcasts from the research articles made sense for them. As always, the first question is, ‘who is this for?’. Once you know that, there’s an ever-growing number of new products and tools that can be utilised to make more out of your existing content.
Another good example we’ve worked on is creating summaries of complex content or news tailored to specific groups. A C-suite reader might care most about strategic impact — what it means for their business, the risks, opportunities, and ROI — while a technical audience needs details on how it works, key challenges, and trade-offs. Journalists look for the story — why it matters now — while marketing and sales teams need a version that highlights the value proposition. In an example like this, the core information stays the same, but the framing and emphasis shift to meet the needs of each group.
Markus Karlsson, CEO and founder, Affino: Keep iterating your prompts until you have something that works. Have a test process in place so you can generate a sample number of articles that you are transforming, and keep working on the prompt until it is as good as it gets before you run larger batches.
Make sure you can wipe the changes, or roll back with versioning.
To get the best outcomes, make it possible for people to tune the transformations on each article if needed, with the ability to customise the prompt right within the editor. A great trick is to provide examples of what you do and don’t want in the output.
Aliya Itzkowitz, manager, FT Strategies: Last year, we supported a publisher in turning long-form articles into shorter, bullet-point versions. We’re seeing more and more examples of this, such as Time Magazine’s ‘Person of the Year’ article. I used to call this ‘content repackaging’ but recently I’ve also heard it called ‘liquid content’.
One recommendation I have, based on my experience, is to have a clear definition of success and a way of measuring it when you begin. In our case, we worked with the newsroom and tech teams to come up with a consistent scoring system for our bullet-points experiment. Example criteria in our scoring rubric included ‘authenticity to the meaning and tone of the original article’, ‘ability to capture the key points’ and ‘any hallucinations or inaccuracies’.
This enabled editors to come up with an objective score for the success of each AI-generated bullet point which could then be fed back to the tech team that was building and improving the solution.
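A rubric like the one Aliya describes can be captured in a few lines of code so that editor scores feed back to the tech team in a consistent shape. This is a minimal sketch; the criteria names, the 1-to-5 scale and the equal weighting are illustrative assumptions, not FT Strategies’ actual rubric:

```python
# Minimal sketch of a rubric-based scoring workflow for AI-generated
# summaries. Criteria and scale are illustrative assumptions.
CRITERIA = [
    "authenticity to the meaning and tone of the original article",
    "ability to capture the key points",
    "absence of hallucinations or inaccuracies",
]

def score_summary(ratings: dict[str, int]) -> float:
    """Average an editor's 1-5 ratings across all rubric criteria."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# One editor's review of a single AI-generated bullet-point summary
review = {CRITERIA[0]: 4, CRITERIA[1]: 5, CRITERIA[2]: 3}
print(score_summary(review))  # 4.0
```

Recording each criterion separately, rather than a single gut-feel score, is what makes the feedback actionable for the team improving the model.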
Thomas Lake, director of product and technology, Infopro Digital: We have had success, and strong editorial buy-in, with summarised versions of articles. After numerous trials and experiments, the three summary formats we have found most reliable when produced by AI are: bulleted lists, shorter summaries (for various purposes) and FAQs.
Peter Dyllick-Brenzinger, head of product and engineering, Purple: This is an excellent use case for AI. You can even use AI to work on the definition of your audience, and go so far as to create a “living persona” of your audience: you define a specific AI (“GPT”) with the characteristics of your audience. This gives you a chat with a representative member of your audience.
But even without going that far, you can experiment with different formats for different channels and audiences. Examples are also very helpful: show the AI you are using examples of what you want to create. After the examples, provide the original article and ask it to generate the new format.
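Peter’s suggestion to lead with examples is essentially few-shot prompting. A minimal sketch of assembling such a prompt follows; the example texts, the target format and the instruction wording are placeholders, not a tested production prompt:

```python
# Minimal sketch of a few-shot "trans-formatting" prompt: show the model
# worked examples first, then the article to convert. All texts below are
# placeholders for illustration.
def build_prompt(examples: list[tuple[str, str]],
                 article: str, target: str) -> str:
    parts = [f"Rewrite articles as {target}. Follow the examples.\n"]
    for i, (source, rewritten) in enumerate(examples, 1):
        parts.append(f"Example {i} input:\n{source}\n")
        parts.append(f"Example {i} output:\n{rewritten}\n")
    parts.append(f"Now rewrite this article:\n{article}")
    return "\n".join(parts)

prompt = build_prompt(
    examples=[("Long feature text...", "- Key point one\n- Key point two")],
    article="Original article text goes here.",
    target="a three-bullet summary for a time-poor executive audience",
)
```

Keeping prompt assembly in code like this, rather than pasting prompts by hand, also makes the iterative testing Markus describes above much easier to run over sample batches.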
Tom Pijsel, VP product management, WoodWing: Converting content from print to digital channels is a complex task; without the use of AI, the level of automation possible depends on the amount of freedom that designers have to manually create designs instead of relying on templates. AI is a great toolkit to help customers to optimise the conversion process.
In general, we see two flows:
- Ensure the text is exported as-is: in this flow, AI can help to detect the structure of the article (what is a title, what is a quote, and so on).
- Optimise the text for a digital channel: in this flow, AI will rewrite the article and optimise it for SEO. This flow will create articles that are better suited for digital channels, but it does require another round of review.
Q: Some editors have a policy to not accept content created or aided by AI — do you think this is short-sighted? (asked by John)
Markus Karlsson, CEO and founder, Affino: By now, this is exceptionally short-sighted. The latest benchmarks show AIs achieving better results than equivalent PhD students. Combined with the latest AIs integrating search and access to research databases, this highlights that, if you are using the right technologies and production workflows, the quality of your content will, if anything, be better than without AI assistance.
It is hard to imagine a scenario where, within the next decade, the majority of content published will not be produced with AI assistance or indeed be wholly AI generated; it is simply a question of when. If you extrapolate from that, you realise that the sooner a newsroom / editor tackles it head on, the better.
Aliya Itzkowitz, manager, FT Strategies: Yes, I think this is short-sighted as I believe AI, like any other technology, will become part of the content production process and, in many newsrooms, it already is. Saying you will not use AI is like saying you will not use data. For publishers who are more risk averse, it’s worth remembering that there are different stages throughout the content lifecycle at which you can involve AI.
For instance, I used to work for a company that used AI for news discovery — alerting reporters at the earliest indication of breaking news. On the other hand, AI can also be embedded into a CMS and act as a last-step sub-editor before publishing.
You may feel more comfortable using AI at the beginning and the end of the process, and leaving the writing and researching up to your highly skilled journalists.
The key is to make sure that — wherever in the process you choose to use AI — you are building in enough checks and balances (eg. a human-in-the-loop) to make sure that this is done responsibly. This will improve the chances that your content will remain high quality and trustworthy for your readers.
Stewart Robinson, managing director, Full Fat Things: I do not think it is short-sighted to have a policy not to accept content that is created or aided by AI. I think that editors who have a good fact-checking process should plot their own course, as they always have done.
Peter Dyllick-Brenzinger, head of product and engineering, Purple: Yes, in the future almost every piece of content will have been in touch with some kind of AI. We need to be transparent about the extent that the AI has been used.
Ian Mulvany, chief technology officer, BMJ Group: Yes, I do think that this is short-sighted, because AI tools can be so tremendously helpful in a wide range of ways. They can be used to help with grammar and clarity of writing, they can help you get over the problem of facing a blank page, and yes, they can be used to do much of the writing for you.
I fully understand concerns around AI taking over authorship, but where the tool is being used as a writing assistant, I see very few problems with that. What I would advise is transparency around its use.
Tom Pijsel, VP product management, WoodWing: This depends on the business model of the customer; for some customers this could be a unique selling point. But in general, I think this is not the right approach.
I think it is important to be able to verify if AI is used by, for example, a freelancer. This way you can ensure that the content is either correctly reviewed or blocked from usage, and adheres to your policies. It is also important to invest in tooling to ensure your latest policies around AI governance are distributed to your internal and external staff.
Q: Are you concerned about the environmental impact of generating content / images using AI? (asked by Joanne)
Markus Karlsson, CEO and founder, Affino: Far less than before DeepSeek was released. It has now been shown that highly capable AIs can become far more energy efficient and will be able to run on devices as simple as smartphones before long. Previously, this was a great concern and frustration.
Stewart Robinson, managing director, Full Fat Things: We should be far more concerned about the environmental impacts of agriculture and energy production before looking at technology, and specifically AI. There is a very real opportunity for AI, if channelled correctly in the energy sector, to help cut carbon emissions in energy production on a scale that could dwarf the usage from technology. Global efforts here should focus on the largest polluters.
Derek Milne, commercial pixometrist, Pixometry: This is a pertinent question, particularly given recent advancements like DeepSeek and other next-generation AI models. These newer technologies are designed to be far more efficient than earlier systems, reducing computational demands.
AI-generated content and imagery do require substantial computing power, which has an obvious environmental impact. This is especially true during the model training stages and the cloud-based processing of content / images.
However, it is important to consider this in context. Traditional design tools like Photoshop, InDesign and Illustrator also consume energy, especially when running on high-performance machines for extended periods of time and extrapolated across large numbers of users.
In many cases, AI can streamline these workflows, dramatically reducing the time and, therefore, energy spent on ‘manual’ tasks.
So, while AI-generated content and images do require substantial power, the key question is whether the overall energy use is offset by the efficiency gains compared to the much more resource-intensive process of traditional content creation. With the emergence of more efficient AI models, the scales are increasingly tipping in favour of AI-driven approaches.
Currently, the key to minimising environmental impact is using AI efficiently, favouring cloud providers committed to renewable energy, and optimising your processes to avoid unnecessary calculation.
Peter Dyllick-Brenzinger, head of product and engineering, Purple: The energy use of current state-of-the-art models is quite concerning. However, the breakthrough of the Chinese model “DeepSeek” also showed that a much less resource intensive training of AI models is possible. I expect the development to go in that direction. At the same time, we should all pressure AI providers to use green energy for their data centres.
Ian Mulvany, chief technology officer, BMJ Group: What we are seeing is a continual improvement in the cost profile of running models. They seem to be coming down in cost by a factor of 10 every 12 months or so. That means that they are becoming more energy efficient over time, and that’s a trend I expect to see continue at pace; there are a lot of optimisations being published in the literature that have yet to make their way into production. On the other hand, the easier these tools are to use, and cheaper they become, the more overall use we will see.
That does mean that there is more focus on the energy production part of the value chain, and that opens up the potential for more investment into energy efficient solutions.
So where does that leave us on this question?
The ideal situation is one where we have data centres that are carbon neutral. I think the growth in demand for computing power is going to be a forcing function to speed up getting to data centres that operate in this way (the cycle time of innovation will be a function of how many data centres we try to bring online), so in the end, this growth in demand could be a good thing, but on the other hand, if we just layer more data centres without that move, then that will be problematic.
So, yes, I am worried, but I am cautiously optimistic.
Q: For publishers reluctant to adopt AI technologies due to potential loss of control, erroneous outputs, IP / licensing concerns — how would you allay their fears? (asked by Martin)
Jennifer Schivas, CEO, 67 Bricks: This technology is here to stay, so it’s important to consider your content strategy and the level of risk you’re willing to tolerate. We work with lots of clients for whom content integrity and accuracy are paramount: standards agencies, pharmaceutical publishing, business intelligence and insights, and so on. For those clients, we’ve developed ways of ensuring quality control, mostly around retaining a human-in-the-loop as part of an AI-assisted workflow and training the models on a high-quality dataset.
Markus Karlsson, CEO and founder, Affino: Simply start using the tools and building viable business models around them. The sooner publishers jump in, the sooner they’ll understand the strengths and weaknesses of the AI toolkit, and the fear fades away quickly. I have a saying that publishers should have more FOMO and less fear of AI, since most of the risks fall on those who are too late to embrace the benefits.
It is also the case that, in a few years, every bit of software that publishers use will contain AI features; they will simply be unavoidable, especially as they will be rather good. Software that doesn’t embrace AI will fade into obscurity.
Tim Robinson, editorial director and publisher, National World: All these concerns are genuine, but the technology (like any technology) isn’t going to go away. Several studies show that journalists are already using AI tools, whether or not their employer knows about it or sanctions it, so now is the time to form a proper view on how your newsroom will approach this. Cautious experimentation will be key. Used responsibly, AI will deliver huge efficiency to processes and open up whole new areas of content creativity.
Brian Alford, founder and CEO, Bright Sites: Most publishers we work with keep a human involved in the process to make sure that erroneous outputs are spotted before publication and this is true across the industry. AI software services like ours are secure, so that your IP is never used elsewhere and remains accessible only to you. It is absolutely possible to integrate AI without compromising on security.
Ben Tregenna, chief technology officer, Content Catalyst: AI presents significant opportunities for publishers. It also presents understandable concerns, but these can be mitigated. It is crucial that the AI technologies publishers adopt respect licensing boundaries to prevent proprietary data from being accessed outside a paid-for license or becoming part of other language models.
AI systems can be designed to minimise incorrect answers by requiring and displaying citations for the sources used in an answer, by allowing the system to provide a “Don’t Know” response when there are few good content matches, and by running a programme of beta testing alongside phased rollouts.
We also advise adding appropriate user warnings and disclaimers to make it clear where output has been AI generated.
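The “Don’t Know” behaviour Ben describes can be implemented as a simple confidence gate in front of the answering step. A minimal sketch follows, assuming retrieval returns passages with similarity scores; the 0.75 threshold and the source names are illustrative assumptions to be tuned per corpus:

```python
# Minimal sketch: refuse to answer when retrieval finds no sufficiently
# relevant passages, and always return citations otherwise.
# The 0.75 threshold is an illustrative assumption, not a recommended value.
def answer_with_citations(matches: list[tuple[str, float]],
                          threshold: float = 0.75) -> dict:
    relevant = [(src, score) for src, score in matches if score >= threshold]
    if not relevant:
        return {"answer": "Don't know", "citations": []}
    # In a real system an LLM would synthesise the answer from the passages;
    # here we just return the citation list alongside a placeholder.
    return {"answer": "<synthesised from sources>",
            "citations": [src for src, _ in relevant]}

print(answer_with_citations([("report-2024.pdf", 0.82), ("blog.html", 0.41)]))
# {'answer': '<synthesised from sources>', 'citations': ['report-2024.pdf']}
```

Gating on retrieval confidence like this is what keeps proprietary answers grounded in licensed content rather than in the model’s own training data.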
Stewart Robinson, managing director, Full Fat Things: AI is like a bike rather than an extra human, so keep the processes that have always made your output high quality.
Peter Dyllick-Brenzinger, head of product and engineering, Purple: The most important thing about using AI today is to keep humans in the driver’s seat. AI is an assistant and not a fully automated agent. We are still far removed from the latter. AI being an assistant makes it much easier to handle. We need to think of AI as a gifted but somewhat unreliable intern. If seen in this light, the responsibilities are clear — and hopefully the fears somewhat dampened.
Ian Mulvany, chief technology officer, BMJ Group: My advice is to take a sober look at the risks of the specific use case you are looking at.
On the erroneous outputs front — there are tools and approaches that can significantly reduce error rates, but equally there are things that you can do that can improve time to market, while still retaining human oversight.
On IP licensing rights, the rate at which new models are coming to market, in both large and small configurations, indicates that as a class of technology, it’s just here now and is unlikely to go away.
If you are worried about giving up your own IP, there are plenty of models that can be run locally, or within your own infrastructure.
In every case where there might be an objection, there is a likely approach that can allay those fears.
More important is the question on whether the idea you have makes commercial sense. These technologies are not a silver bullet, and implementing them with appropriate safeguards and quality controls is not cost free.
Tom Pijsel, VP product management, WoodWing: I fully agree; publishers should prevent their content being used to train AI models without their knowledge, or without financial compensation in return.
Q: AI has a habit of including unwanted elements in the images it creates. What can publishers do to minimise / avoid this? (asked by Martin)
Markus Karlsson, CEO and founder, Affino: A great deal can be done with the right prompting: think adjacent to what you are looking for in terms of the output. For example, a prompt like ‘medieval viking family’ might present a multiracial family with modern lighting and faces, but ‘nordic viking family’ might give you something much closer to what you are looking for, ie. more rustic and appropriate to the time period and framing. It is not always obvious what the keywords are. In practice, some AI image creation tools are much better than others, so keep trying out newer versions and alternatives.
Tim Robinson, editorial director and publisher, National World: Prompting is by nature a trial and error process, but our designers at National World have found practice and application have gradually improved the accuracy of the results. Prompt engineering will no doubt become a specialist skill as these technologies become more commonplace, but this shouldn’t put people off giving it a try. There is loads of prompt guidance on the internet to consult. Here’s ChatGPT’s guidance.
Stewart Robinson, managing director, Full Fat Things: This will improve but until it does, follow standard editorial and graphic processes that you already have for human curated images. This is not a big issue with any publisher that has a quality process for content.
Derek Milne, commercial pixometrist, Pixometry: This is a valid concern, as AI-generated images will occasionally include unwanted elements (extra fingers, hands, etc.). However, there are several approaches that publishers can adopt to minimise or prevent this issue.
- Adopt a hybrid workflow: Arguably the most rewarding strategy for now; blending AI automation with traditional design techniques ensures that AI speeds up the creative process while skilled Photoshop operators provide the final level of polish. This approach allows for greater control over quality and consistency, making AI a valuable tool rather than a complete replacement for skilled artistry.
- Choose the right AI engine: Different AI models excel at different styles, so selecting the most suitable engine is key. Consider both image generation styles and quality along with the skill level of your operators. For example, Adobe Firefly is a powerful and accessible tool, while other engines may be better suited for creating highly photorealistic images. Experimenting with different models can help find the best fit for your needs.
- Refine and structure prompts carefully: The more precise and structured the prompt, the better the AI will understand what’s required. Avoid vague descriptions and clearly specify both the desired elements and anything that should be excluded.
Given the risk of extraneous fingers or unwanted elements, ask yourself: Does the image still effectively convey the intended story without these details? If so, simplifying or adjusting the prompt or composition may be the best approach — for now.
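Derek’s advice to specify both desired and excluded elements maps directly to the “negative prompt” that many image models (such as Stable Diffusion) support. An illustrative structured prompt follows; the wording is an example, not a tested recipe:

```python
# Illustrative structured image-generation request with explicit
# exclusions ("negative prompt"). Field names follow the common
# Stable Diffusion convention; all wording is an example, not a recipe.
prompt = {
    "prompt": (
        "Editorial illustration of a newsroom adopting AI tools, "
        "flat vector style, muted corporate palette, wide 16:9 crop"
    ),
    "negative_prompt": (
        "text, watermarks, logos, extra fingers, deformed hands, "
        "photorealistic faces"
    ),
    "seed": 42,  # fixing the seed makes results reproducible while iterating
}
```

Listing known failure modes (extra fingers, stray text, watermarks) in the negative prompt, and pinning the seed while you refine the positive prompt, makes the trial-and-error process Tim describes above far more systematic.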
Q: Should we be using AI to help us write articles or is that bad for SEO? (asked by ‘s’)
Markus Karlsson, CEO and founder, Affino: A lot of the AI text detection used to minimise the impact of AI text on SEO is of very poor quality; even the AI text detectors used by universities to test for cheating can misidentify text from the last century as AI generated. So we are in a strange place right now with this. It is also clear that AI output can be adjusted to not seem like AI output, using some fairly easy-to-research methodologies and tricks.
In fact, this question is already becoming moot, because we have moved into the GEO (generative engine optimisation) phase, which eclipses SEO, with Google committing to have GEO cards for nearly all the key search terms. If you are not cited on the GEO card, then all the SEO in the world is not going to get you into the top seven links, and you have therefore already largely failed.
Brian Alford, founder and CEO, Bright Sites: When it comes to AI, Google seems to be mostly concerned with ‘scaled content abuse’, which it defines as: “Pages and websites made up of content created at scale with no original content or added value for users.” This is not the same as using AI to help you to write or research an article. The same SEO principles still apply if you’ve used AI; you should be aiming to create original, authoritative content.
Stewart Robinson, managing director, Full Fat Things: I think companies should be using AI to help but whether it writes the article, fact checks it, suggests a structure or provides background information is up to the talent wielding the AI sword. Your editorial process should still bring through your editorial voice.
Peter Dyllick-Brenzinger, head of product and engineering, Purple: If there is still something unique about the article, there is no SEO problem if AI was used in its creation. There is only an SEO problem if there is nothing unique about the content, which is true of most wholly generated articles. So, for example, if you simply ask ChatGPT to “Write an article about a trip to Copenhagen”, the result is most probably bland “AI slop”. However, if you did your own research, or better yet actually travelled to Copenhagen, and used AI to create the article based on your material, no harm is done to SEO.
Ian Mulvany, chief technology officer, BMJ Group: Irrespective of whether it is bad for SEO, at the moment, I think that this is a disservice to other humans. The term that has been coined is “AI Slop”, content with no nutritional value.
Christian Scherbel, CEO, Smartico: No — at least not inherently. But if it’s done poorly or sloppily, then yes, it can.
Google’s goal is the same as yours: delivering quality content that’s engaging, relevant, and genuinely useful. If AI is used just to repeat existing information, stuffing articles with generic phrases like “it is worth noting” or “consequently” (that AI picked up from science books), the content won’t connect with readers. And when people don’t engage — because it feels robotic and adds no real value — Google picks up on those signals, like low dwell times, and ranks it lower.
However, when used correctly, AI can be a powerful tool. For small news updates, company announcements, or business profiles, AI-generated content can be a great way to expand coverage efficiently. The key? Always have a human proofread and refine it. AI should be the co-pilot, not the captain.
Done right, AI isn’t a threat to SEO — it’s an opportunity.
Tom Pijsel, VP product management, WoodWing: AI can help you to create better content, however it should be positioned as a co-pilot and advise the user. If user and AI go hand in hand, you get the best of both worlds.
Q: If you were going through a digital transformation journey, for example building a new publishing website, how essential is it to include AI-enabled search with a personalised LLM tool? (asked by Sian)
Jennifer Schivas, CEO, 67 Bricks: AI-enabled search and personalised LLM tools aren’t just enhancements; they’re quickly becoming fundamental to modern digital experiences. These tools are already becoming the norm, so the real question is: why wouldn’t you include them? If you’re investing in a new publishing platform, it makes sense to future-proof it by incorporating the tools that will define the next generation of user experience.
Markus Karlsson, CEO and founder, Affino: If you are not doing this then your new site is already obsolete. It is like creating a website today that doesn’t work on mobile phones, simply unthinkable.
Ben Tregenna, chief technology officer, Content Catalyst: AI-enabled search can be game-changing for publishers, especially for those with vast libraries of specialised content. Businesses need instant access to insights that empower them to make swift, well-informed decisions.
AI search is faster and more immersive than conventional search and can extract relevant data and insight from a content portfolio and synthesise this information into a concise summary. This opens up discovery of your full content library to users and facilitates a more personalised experience.
Additionally, publishers can get a deeper insight into the intent and motivation of their users as they interact more naturally with AI search and discovery tools.
Publishers who implement AI chat discovery are likely to see significant ROI in terms of productivity, customer satisfaction and retention.
Stewart Robinson, managing director, Full Fat Things: I would not include any feature that I couldn’t see a way to monetise directly. If you are ad supported, then these AI tools need to replace your normal multiple ad supported page loads with something. For subscription sites, I see AI search as an add on. Would your marketing team see them as a real subscription driver?
These are the real decisions.
Thomas Lake, director of product and technology, Infopro Digital: Google and many other providers have shown that a vectorised database and AI-enabled search retrieval yield better results than traditional indexing. I would strongly recommend using those tools to power the site search, even if what the user sees is closer to a traditional SERP. There are some strong supplier options out there, which de-risks the potential overhead of trying to keep up with the technology.
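At its core, the vectorised search Thomas mentions ranks documents by the similarity of their embeddings to the query embedding. A minimal pure-Python sketch follows; the three-dimensional vectors and document names are toy values, whereas a real system would use an embedding model and a vector database:

```python
import math

# Minimal sketch of vector search: rank documents by cosine similarity
# to the query vector. Vectors here are toy values; in practice both the
# query and the documents would be embedded by the same model.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "pricing-guide": [0.9, 0.1, 0.0],
    "api-reference": [0.1, 0.8, 0.3],
    "case-study":    [0.5, 0.5, 0.0],
}
query = [0.8, 0.2, 0.0]

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # ['pricing-guide', 'case-study', 'api-reference']
```

Because similarity is computed in embedding space rather than by keyword overlap, a query can surface documents that never use the query’s exact words, which is what makes this approach suit the ambiguous user intent Ian mentions below.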
Peter Dyllick-Brenzinger, head of product and engineering, Purple: A personalised LLM tool is currently not very common on publisher websites, so I would say it is not essential. However, depending on the brand, it could make sense to offer a functionality like that.
Ian Mulvany, chief technology officer, BMJ Group: This depends entirely on how much of your traffic is driven by on-site search. Where user intent is ambiguous, these tools can help the user get to the right resource on your site, as they can match queries semantically, but I think this should now be considered a default evolution of search, rather than something special and specific in its own right.
Q: What disclosures or acknowledgements do you run accompanying AI generated content, be it a total illustration or a story first draft? (asked by Michelle)
Markus Karlsson, CEO and founder, Affino: We are increasingly seeing publishers who have been using AI tools for a while reduce their level of disclosure. My gut feeling is that publishers will disclose that they are using AI during introductory and test phases, but then cease to do so over time.
Tim Robinson, editorial director and publisher, National World: Where we use AI-generated images in the National World daily print titles, we are always clear about this. The caption always credits the image to the programme (Adobe Firefly) which has generated it. Readers will appreciate the transparency, although our images are generally creative illustrations and not presented as “real” photo-journalism. Any summary content we publish is similarly labelled. Attribution and transparency will be absolutely crucial as the amount of generative AI content increases.
Tom Pijsel, VP product management, WoodWing: I think it is important to internally flag AI-generated content; to what extent you flag it externally depends on your review process and usage. A fully AI-generated image should, in my opinion, be marked, but if you use AI as a co-pilot to copy-edit your article, it would not make sense. The latter would be similar to marking every article for which you used a spellchecker: it would be counterproductive and result in a lack of trust with the audience.
Q: What’s next and truly viable for batch automation for a large volume of images? (asked by Mark)
Markus Karlsson, CEO and founder, Affino: A strange question; what is the context? Also, batch automation need not have anything to do with AI. There is plenty of scope for AI batch automation in terms of optimising pictures and making them more on-brand, in addition to the standard features like cut-outs, viewing enhancements and so on. The biggest factor is how consistent the tools are in delivering the outcome you are looking for, something which AI image tools still struggle with today but which will likely be solved before too long.
Derek Milne, commercial pixometrist, Pixometry: There are two clear areas where AI-based batch automation will succeed in the short term: firstly, the upscaling of low-resolution, poor-quality images to a higher and more refined quality; secondly, the detection of whether images were traditionally captured or AI generated, which will only become more prevalent.
- Recreating details: Image upscaling improves the resolution, content and quality of images, particularly in situations where the original files are too small or lack detail (think user generated content, web-sourced images or eyewitness photos). These processes will be able to intelligently recreate missing details, enhance edges, remove noise and improve overall quality, ensuring images reproduce perfectly for print or high-resolution digital displays.
- Image authentication: AI-generated images are becoming increasingly prevalent and extremely photorealistic. Being able to detect and flag such images is critical for preventing misinformation and ensuring the authenticity of published content. AI-based engines can already accurately identify images generated by the majority of common engines (Firefly, Midjourney, DALL-E and many more); however, their adoption is still in its infancy. Expect to see this technology become more available in the next year.
With these two technologies on the near horizon, their potential to significantly enhance imaging quality, streamline workflows, and improve overall accuracy is becoming increasingly evident. Monitoring these advancements closely will be essential for publishers aiming to embrace and implement efficiency-led solutions for their imaging needs.
