Positioning & Advocacy
01.09.2022

What’s next for the EU Artificial Intelligence Act?

The proposal for the EU AI Act was released by the Commission in April 2021 and has since been subject to major lobbying from civil society and industry. When we last wrote about the AI Act in December last year, we were optimistic that a first Parliament position might be available by the summer break. At present, the co-legislators are still finalizing their positions before the trilogue process can begin.

Cross-sector AI Act regulation makes for slow progress

By April 2022, the joint lead committees, IMCO and LIBE, had published their draft report, which included proposed amendments to the original text; by June, over 3,000 amendments had been submitted. Prior to the summer break, and with the Czech presidency in place, some progress was made on working through these amendments and creating compromise texts in some areas. However, the complexity of the regulation, and its interaction with sector-specific laws covering areas that may use AI, such as finance or health, has slowed its progress.

Five standing committees are also involved in producing opinion reports, which is further slowing progress; a vote on the JURI committee's opinion report has been delayed until after the summer break. Positions continue to fall broadly into two camps: those who describe themselves as pro-innovation and pro-competition, and therefore want fewer restrictions placed on businesses, and those who want to ensure fundamental rights are protected in a context of increased data collection, analysis and remote decision-making.

Changes needed to protect consumers in AI Act

The breadth of the definition of AI has been a contentious issue, which the Czech presidency has proposed to resolve by narrowing what counts as AI so that it does not unintentionally capture standard types of software. This would align the definition with the OECD's, which is helpful for international alignment.

Many pro-consumer recommendations were tabled as amendments by MEPs, including:

  • Introduction of a horizontal provision establishing a set of mandatory basic principles, such as fairness, accountability and transparency, that would apply to all AI systems. This is seen as critical to ensure that fundamental consumer expectations are met, whether or not the product is deemed high risk.
  • Introduction of new rights for consumers, such as the right to be represented by a consumer organisation when exercising their rights, plus the possibility of joining a collective redress action. This would help strengthen the enforcement side of the Act. A right to an explanation was also proposed by several political groups.
  • Broadening and strengthening of a number of prohibited practices. For example, it was proposed that the prohibition on social scoring be extended to private entities.
  • Third-party assessment was proposed as the conformity assessment procedure for high-risk AI systems in order to avoid providers ‘marking their own homework’ by assessing conformity to standards in-house.

The new text has also tightened up what is classed as a ‘high-risk’ use case (one that must comply with European standards), removing some applications relevant to consumers, such as the setting of insurance premiums.

However, a new understanding of what is meant by ‘high risk’ has been introduced, based on the immediacy of the impact of the decisions a system makes and/or enacts. In other words, it is the lack of human review or intervention that would make an application a high risk to fundamental rights.

Appropriate use of standardization for AI Act

Another area of progress in the AI Act’s development has been the issuing of a Standards Request from the European Commission to the European Standardization Organizations. As a reminder, the AI Act takes a product safety approach, using harmonized technical standards to specify how the requirements set out in the Act can be met. If an AI system complies with these standards, it is presumed to comply with the Act and can be placed on the market (in other words, implementation of a harmonized standard provides a ‘presumption of conformity’).

Technical standards are useful in many fields, but their use in the AI Act has raised concern among consumer groups that they may be used to interpret or define legal requirements which impact fundamental rights. Doing so through private standardization bodies, with limited participation from civil society, rather than through an open, democratically accountable process, adds to these concerns.

However, consumer groups have seen some success in raising the need for stronger participation processes, given the significance of AI systems to society. They continue to argue that standards should stick to purely technical aspects and must not veer into areas of public policy and law that require interpretation (such as bias in data).

AI regulation around the world

The UK and US are amongst the countries keeping a close watch on the AI Act as the EU develops the first comprehensive, horizontal regulation of its kind. One driver for getting to grips with AI is US and EU concern that China is stealing a march on AI, particularly in terms of standardization. China has been open about its strategic use of standards, knowing that if one set of standards embeds itself early on, it will be difficult to change.

To counter this influence, the US and EU established the EU-US Trade and Technology Council (TTC) in 2021 with a commitment to cooperate on developing innovative and trustworthy AI systems, and to work together to establish technical requirements and standards rooted in ‘shared democratic values’ and the principles reflected in the OECD recommendation on AI.

AI regulation in the US

Nationally, the US has taken a more sectoral approach, with guidelines on the use of algorithms in decision-making being considered by different regulators such as the Food and Drug Administration and the FTC. The most cross-departmental approach comes from the Department of Commerce, whose National Institute of Standards and Technology (NIST) is developing a risk management framework.

AI regulation in the UK

The UK very recently published its AI regulation strategy, which has much in common with the EU AI Act in terms of addressing high-risk use cases and promoting innovation by placing less focus on low-risk ones. As with the US approach and the proposed new definition in the EU compromise text, the UK framework draws on OECD work, in this case the cross-sectoral set of OECD principles on AI. These include the principles of robustness, accountability and transparency.

The UK strategy avoided settling on a definition of AI and instead focused on characteristics and capabilities. It also placed responsibility for developing guidance and enforcing good practice on a group of market regulators covering communications, competition, consumer protection, data protection, medical care and financial conduct. This is intended to reflect the reality that AI systems and use cases are spread across many sectors and remits.

Innovation and trust: two sides of the same coin

For Euroconsumers, building trust is key to the successful roll-out of AI and to the ability of people and society to make the most of its innovation opportunities. Our research has shown that consumers are optimistic that AI can improve their lives, but they have ongoing concerns about losing privacy and control, being manipulated and not being able to get redress. Addressing these concerns directly, through input from consumers, testing and consumer-centered design, will make products much more appealing when they reach the market. Making AI fully trustworthy is therefore essential for both consumers and companies. Together with BEUC, we advocate, among other things, for the following to help make AI trustworthy:

  • Establishing a set of basic principles, such as transparency, accountability and fairness, that would apply to all AI systems, not just those that are high risk.
  • Adding strong rights for consumers, including the right to redress.
  • Considering the economic harm of AI in risk assessments.
  • Curtailing some uses of AI that could lead to serious consumer harm, such as social scoring by private entities and emotion recognition.
  • Requiring independent, third-party assessment of whether ‘high-risk AI systems’ conform to the relevant standards.

Too often, innovation and the development of a regulatory framework to improve transparency and trust are set against each other in opposition. This is neither helpful nor accurate. In fact, innovation will be much easier to deliver where there is consumer trust: people will be more willing to adopt new technologies if it is clear that the risks and downsides have been considered and dealt with before something goes wrong in practice, and new technology can be developed that is more relevant and appealing to consumers.

As the proposed regulation progresses into 2023 and beyond, Euroconsumers and its members will be campaigning, with BEUC, to ensure that consumer interests are central to the regulation, delivering the innovation consumers are calling for, fostered by openness and trust.