Liz Coll

GenAI: from shock and awe to get stuck in and solve?

Amongst the noisy hype and fear-mongering about Generative AI, there's been steady progress on getting stuck in and solving immediate problems caused by this new breed of AI. Liz Coll doesn't think these efforts can be the whole answer, but they can show how particular weaknesses and harms might be overcome.

As legislators across the world negotiate rules for the world of Generative AI, innovators, consumer groups and lawyers have been busy developing tools and actions to empower creators and consumers. In this blog, I look at how people are feeling about Generative AI and what innovative solutions might tell us about its future.

The shock and awe of Generative AI?

Generative AI, or GenAI, is a broad term encompassing a set of advanced machine learning technologies that can deliver convincing images, text, speech or music. The public release of ChatGPT in November 2022, and its subsequent incorporation into Microsoft's Bing search engine, has seen the conversation about AI ramp up to new levels, with plenty of shock and a fair amount of awe at what these new systems can do.


The hype around its capabilities has been offset by dire warnings, sometimes from unexpected sources. Some of the global tech leaders who drove the creation of publicly available GenAI models now say its potential for harm is so great that development should be paused and regulated. Given that these are the same people who usually lobby against rules on the basis that they will ‘stifle’ innovation, their open letter calling for a pause was met with scepticism from those who want to focus on existing harms to individuals, society and the environment. Regardless, opinion is divided as to whether tools like ChatGPT, Bard and DALL-E are a fun diversion and useful information organiser or an unstoppable, malevolent force.


In this blog, I want to get beyond this debate and show that, amongst the noisy hype about ChatGPT and its fellows, there are innovators, lawyers and consumer advocates finding ways to overcome GenAI's weaknesses and make it better for all.

Taking the temperature on Generative AI?

Crudely put, industry, investors and governments are excited about its potential and keen to stake out an early lead in the space, whilst of course acknowledging risks. Civil society, on the other hand, is trepidatious. Digital rights organisations have long understood how the products of mass, social, data-driven technology can be used to discriminate against and exclude people, and to diminish human rights and wellbeing.


The complexity of GenAI, and the speed at which it is being rolled out to users without any governance or guardrails, raises the alarming prospect that phenomena like fake news and disinformation could get much worse.


Employees have been worried for a while about how ever-increasing automation will threaten their jobs. Generative AI will heighten these concerns, given that it's no longer only manufacturing jobs in the firing line: GenAI could replace customer service and content-delivery roles, as well as many internal company processes.

Consumer attitudes are still developing

As mainstream awareness of Generative AI is relatively new, evidence of consumer usage and attitudes is only just coming through. Amongst users there's some interesting dissonance around the question of reliability and accuracy. A new survey from Euroconsumers found that while only 31% of users believe it's a reliable source of information, 73% are satisfied with the reliability of the answers. Looking at it the other way, over two thirds of people who use ChatGPT (69%) don't think it's a reliable source, yet almost three quarters are finding the answers it gives helpful.


This can perhaps be explained by the fact that public attitudes towards GenAI are still very much in formation. Generational and socio-economic factors influence attitudes, but as a broad brush we can say that people see its potential yet are nervous about the risks. A global survey from IPSOS-Mori reflects this ambivalence: on average across 31 countries, 52% of adults say AI-driven products and services make them nervous, whereas 54% are excited about them.

Tech and legal solutions to GenAI

It’s always fascinating to see the range of responses to the challenges of all types of digital technologies. The response to GenAI is no different: alongside proposals for new legislation, a burgeoning array of tools and techniques to empower creators and consumers in the world of GenAI is growing.


Of course, these may not fix all the challenges, but they tell us something important: first, about the way that GenAI is affecting people and businesses; second, that there’s a host of campaigners, lawyers, developers and innovators establishing new solutions to offset the risks of AI and make it safer and fairer.


Some of these solutions use technology to respond to the challenge of AI fakery by verifying the provenance of GenAI outputs. For example, digital watermarks make tiny adjustments to the word pattern of an AI-generated text, creating a ‘fingerprint’ that can identify how it was produced.
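The details of real watermarking schemes vary and are more sophisticated than this, but as an illustration, here is a toy sketch of one idea from recent research: each word deterministically marks roughly half the vocabulary as ‘green’, the generating model prefers green words, and a detector then measures how often the text lands on green words. The function names, the tiny vocabulary and the hash-based partition below are all invented for illustration.

```python
import hashlib

def green_tokens(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically mark roughly half the vocabulary as 'green',
    seeded by the previous token (a toy stand-in for the scheme's hash)."""
    greens = set()
    for tok in vocab:
        digest = hashlib.sha256((prev_token + "|" + tok).encode()).digest()
        if digest[0] % 2 == 0:  # about half the vocabulary is green
            greens.add(tok)
    return greens

def green_fraction(text: str, vocab: list[str]) -> float:
    """Fraction of words that fall in the green list keyed by their
    predecessor. Watermarked text is steered towards green words, so it
    scores well above the ~0.5 baseline expected of human-written text."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_tokens(prev, vocab))
    return hits / (len(words) - 1)
```

A detector flags text whose green fraction is statistically too high to have occurred by chance; the adjustments are invisible to a human reader because the model only nudges word choices, never forces them.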


There’s also software to help prevent adaptation by an AI system: for example, researchers at MIT have released ‘PhotoGuard’, which makes invisible changes to a photo that then prevent it from being modified by an AI system.


In other cases, tools already familiar in other tech environments are being repurposed for GenAI tracing tasks. The site Have I Been Trained? is a great example, empowering creators with the ability to check whether images they have created have been used to train a GenAI system. Creators can also opt their images out of use by training models. This could form the basis of a stronger consent layer for GenAI, where creators can easily make informed choices about which of their work they want to be part of training models.


We’re also seeing calls for the labelling of AI-generated content. While this may not be as innovative as other examples, it is a potentially helpful solution that relies on good old-fashioned labels designed to help people understand what they are seeing – although in fast-moving digital environments, such information remedies have limited impact.


In related developments, the first legal cases against companies using AI to generate content and information have been launched, using long-established legal principles to uphold fair treatment. Novelists are suing OpenAI and Meta for breaching copyright law by using their texts to train the ChatGPT and LLaMA language models.


Public enforcement attempts have also begun. As discussed at the recent Start Talking, consumer groups in Italy, Belgium, Portugal and Spain have filed complaints with their national consumer protection authorities against Microsoft over misleading commercial practices in product recommendations that could influence purchasing decisions.


It’s interesting to contrast these developments with the early bans of ChatGPT in countries like Italy, or by universities – seen as necessary but also undeniably blunt instruments. As the dust has settled, we’ve seen other solutions arise which, alongside critical legislative changes, international co-operation and new philosophical concepts, can deepen understanding both of the problems and of what can be done about them.


We’re moving from ‘shock and awe’ to ‘get stuck in and solve’. It won’t be enough on its own, but it’s heartening to see this diverse groundswell of responses, which could complement wider rights-based legislation and protection.


***Euroconsumers showcases diverse perspectives and opinions from various stakeholders, however these do not necessarily reflect the views of Euroconsumers***