
REVIEW 

AI Unleashed: five takeaways

New Scientist has just held the first event in its ‘Business Insights’ series, a new B2B offering from the successful science publisher. The conference title was ‘AI Unleashed: Revolutionising the future of your business’. James Evelegh was there.

By James Evelegh


The conference, held on 28th September at 8 Fenchurch Place, London, was standing room only. This was probably only to be expected, given the huge hype surrounding the subject matter and the strength of the New Scientist brand.

The morning event was moderated by Jackie Scully, executive director at content marketing agency Think, and the line-up of panellists and speakers included leading figures from government, academia and business.

Insights and advice were plentiful.

Here are my five takeaways:

1. AI isn’t new, but its democratisation is…

The strand of computer science that we call ‘artificial intelligence’ can trace its origins back to the early 1940s.

Generative AI, which has fuelled the current hype, has been around since at least the 1960s. ‘ELIZA’ was an early natural language processing computer program created between 1964 and 1967 by Joseph Weizenbaum at MIT.

What has changed is the acceleration of AI development over the last decade driven by huge increases in computing power and the availability of massive data sets with which to train the AI models.

The launch last November of ChatGPT put a hugely powerful generative AI model into the hands of the general public, thereby, at a stroke, democratising and mainstreaming the technology.

This is why we are all talking about it now.

2. Not all AI is generative and not all generative AI is ChatGPT.

Because of the hype surrounding the release of ChatGPT and Bard, many people are making the incorrect assumption that ‘AI’ is synonymous with ‘generative AI’. The reality is that generative AI is just one of a family of AI solutions, and relying solely on generative AI models could limit your chances of success.

AI solutions are often at their most effective when they are combined, said Ben Clinch, principal enterprise architect, information architecture, BT Group.

The fact is that generative AI might well not be the right AI solution for your business.

As Elena Simperl, professor of computer science and deputy head of department for enterprise and engagement in informatics at King’s College London, pointed out, some of the non-generative AI solutions will be easier and cheaper to implement.

3. ‘Is generative AI trustworthy?’ is the wrong question.

There has been a lot of discussion since ChatGPT’s launch about the “trustworthiness” of generative AI solutions. Trustworthiness is a quality you associate with people and organisations, not technology, said Mhairi Aitken, ethics research fellow at the Alan Turing Institute.

Asking the question points to a fundamental misunderstanding of how generative AI works.

Generative AI creates a compelling sequence of words, and the rightness or otherwise of the response will depend on the data sets (and their inherent biases) used to train it and on the precise wording of the prompt used to generate the response.

It will always provide an answer, and this is not necessarily a good thing. In that respect, generative AI was likened to an articulate, but sneaky, 13-year-old.

The predisposition of the platform to ‘hallucinate’ (ie provide an answer even if none is justified by its training data) is a significant challenge.

To meet this challenge, businesses need to:

  • Work on the assumption that the output cannot be used verbatim, but will need to be checked by humans.
  • Recognise this tendency and develop internal guidelines and codes of practice to mitigate it, along with providing appropriate training and quality controls.
  • Accelerate their understanding and use of ‘prompt engineering’. This is an emerging discipline whereby users work out, and document, the form of words and phrasing that, when used in a prompt, will be most likely to elicit a correct answer. One commonly used strategy is to prompt the AI to provide an answer ‘with reference to’ such and such a resource. Put simply, constraining the AI in this way produces better outcomes (see the sketch after this list).
  • Use generative AI technology, but reference only predefined data sets (ie create walled gardens of your own data). In the short term, this option is likely to be available only to big blue-chip companies.
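
To make the last two points more concrete, here is a minimal sketch of what a constrained, ‘with reference to’ prompt might look like in code. It was not shown at the event; it assumes the official openai Python package, an API key in the environment, and a placeholder model name and document text.

    # Illustrative only: constrain a generative model to answer with reference
    # to a predefined document, and to admit when the document has no answer.
    # Assumes the official 'openai' package and an OPENAI_API_KEY environment
    # variable; the model name and document text are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    reference_document = "...your own, pre-approved source material goes here..."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": (
                "Answer only with reference to the document provided. "
                "If the document does not contain the answer, say that it "
                "does not, rather than guessing."
            )},
            {"role": "user", "content": (
                f"Document:\n{reference_document}\n\n"
                "Question: What does the document say about our refund policy?"
            )},
        ],
    )

    print(response.choices[0].message.content)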

4. AI provides huge opportunities, but there are risks too.

According to Paul Scully, minister for tech and the digital economy, AI presents an “enormous opportunity to break out of the shackles of ‘business as usual’.”

The opportunity is there for it to facilitate and accelerate significant advances for the benefit of humankind, to drive greater productivity in the way businesses operate, and to enable completely new business models to emerge.

Victoria Edwards, CEO of FIDO Tech, described how her company was using AI to help create a world without water scarcity, by solving the challenge of leakage detection.

The problem is extremely serious. During her short talk, she said, 1,562,500,000 litres of water would be lost around the world.

Her company’s solution was to use sensors, and the AI they have developed to interpret the readings from those sensors, to tell water companies a) whether there is a leak; b) how big it is; and c) where it is.

This is potentially transformational stuff.
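
To give a feel for the shape of such a system, here is a purely illustrative sketch, not FIDO Tech’s actual method: the sensor names, baseline figures and threshold are invented, and the point is simply how readings might be turned into the three answers above.

    # Purely illustrative: answer a) is there a leak, b) how big, c) where,
    # from night-time flow readings. Not FIDO Tech's method; all numbers,
    # sensor names and the 0.5 l/s threshold are invented for the example.
    sensor_positions_km = {"S1": 0.0, "S2": 1.2, "S3": 2.5}    # distance along the main
    baseline_flow_lps = {"S1": 4.0, "S2": 3.8, "S3": 4.1}      # expected night flow
    latest_flow_lps = {"S1": 4.1, "S2": 6.3, "S3": 4.0}        # observed night flow

    # Excess flow above baseline at each sensor.
    excess = {s: latest_flow_lps[s] - baseline_flow_lps[s] for s in sensor_positions_km}
    worst = max(excess, key=excess.get)

    if excess[worst] > 0.5:                                    # invented threshold
        print("a) Leak suspected")
        print(f"b) Estimated size: {excess[worst]:.1f} litres per second")
        print(f"c) Nearest sensor: {worst}, {sensor_positions_km[worst]} km along the main")
    else:
        print("No leak detected")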

David Wakeling, head of the markets innovation group at legal firm Allen & Overy, described how, by introducing an artificial intelligence platform built on OpenAI’s latest models to help lawyers draw up contracts and the like, the firm was seeing savings of two hours per lawyer per week. Aggregate that up across its global workforce of 3,700 lawyers (roughly 7,400 hours a week) and you have what he modestly described as an “enormous but a little bit boring productivity gain”.

For Darshan Chandarana, emerging technologies leader at PwC, experimenting with generative AI was a no-brainer. PwC sees itself as a knowledge management company, so any new technology that might help it manage knowledge better has to be embraced.

He saw the opportunity for PwC as two-fold, delivering both “bottom-up” improvements (they had identified four thousand use-cases where AI could be used for productivity gains) and “top-down” improvements (exploring what business models are available today, utilising this new technology, that weren’t available yesterday).

The risks to society were well illustrated by examples from the education sector, where, said Elena Simperl, students do take ChatGPT answers verbatim and submit them as their own work. Education leaders are having to rethink their application vetting (cover letters, personal statements) and examination procedures to take into account the widespread use of generative AI.

The lack of attribution in generative AI’s responses is a big cause for concern, and poses a real threat to publishers and the creative industries more widely.

Some authors, it was noted, had taken their work off the open web.

Elena Simperl also anticipates that the lack of attribution might lead to a significant reduction in participation on the open web, on platforms like Wikipedia. If generative AI simply sucks up all the information from such platforms and regurgitates it, attribution-free, what incentive is there to contribute?

5. If you’re not yet engaging with AI, now would be a good time to start.

According to Emily Wilson, editor in chief of New Scientist, we are at a “unique point in history”.

“There have been AI moments before, and after every AI summer there has been an AI winter,” Emily continued, “but this time it’s different; what we have now is enough to change everything.”

She quoted a Stanford University academic: “People who don’t know how to use AI will be replaced by people who do.”

At the very least, every organisation is duty-bound to properly evaluate if and how it can harness the power of AI.

For companies looking to get started down the AI road, the panellists had lots of advice:

  • Start experimenting, but expect initial failure.
  • Research the subject and look for best practice, but you don’t need to work it all out yourself; there is a huge body of existing research and advice available from industry, academia and government.
  • Don’t be scared.
  • Open your mind: you need to think in different ways, otherwise the most you can ever hope for is a 5% productivity gain.
  • Throw preconceptions out the window; start with a blank sheet of paper.
  • Embrace the potential and use your imagination.
  • Realise that there is no one-size-fits-all solution. There is no “right model”; what works for you will depend on your organisation.
  • Focus on use-cases and delivering a return on investment.

The raw material of AI is data. As Darshan Chandarana told attendees: “your data is probably horrible”. So, it’s never too late to start cleaning and structuring your data. AI might be amazing, but the old adage of “garbage in, garbage out” still applies.

A number of speakers addressed the perception that AI posed an existential threat to humanity and could wipe us all out. Helen Margetts, professor of society and the internet, Mansfield College, University of Oxford, reassured us that such talk was overblown: “You can all sleep soundly.”

That’s a relief.


This article was first published in InPublishing magazine. If you would like to be added to the free mailing list to receive the magazine, please register here.