NEMO recently published recommendations for policy makers on AI and museums, urging that they be considered as the European Parliament continues its work on AI issues.
- A political vision for museums and cultural heritage in an AI-driven society
Recognising the unique position of museums and cultural heritage as pillars of trust within society, it is imperative to integrate them into a regulatory framework. Artificial intelligence in museums needs to be addressed and shaped so that technological developments do not simply reshape museums from the outside. Collaborative efforts between governments, regulatory bodies and museum professionals can ensure that museums play a pivotal role in the development of ethical practices related to emerging technologies.
- Financial investments to apply AI successfully in the Public Cultural Domain
Financial resources must be allocated for infrastructure, equipment and highly qualified human resources, enhancing museums’ professional capacities. AI needs to source high-quality, interoperable data and properly described metadata. Copyright issues must be resolved. Museum professionals need adequate skills to perform these tasks, to keep pace with rapidly evolving AI capabilities and to address sector-specific concerns. Furthermore, standing commitments to support the cultural heritage sector should be expanded to ensure the quality and quantity of digitalisation required by Cultural Heritage Data Spaces and the European Collaborative Cultural Heritage Cloud.
- Establishment of a European AI innovation hub for cultural heritage
To foster creativity, innovation and collaboration, to centralise expertise and knowledge, and to address the challenges AI poses for the sector, there is a need for a dedicated competency centre in Europe. This space would serve as a hub bringing together expertise, practices, knowledge and resources in a network of and for professionals, ensuring digital innovation and development across the diverse European cultural heritage sector, in alignment with the values of human-centred design, privacy and open-source practices.
The AI Act's relation to the cultural sector
Culture Action Europe (CAE) has analysed the Act’s relation to the cultural sector and points to articles concerning marking AI-generated content and copyright requirements for the use of data to train AI. CAE explores the new regulations provided by the AI Act and how they might apply to the cultural sector as follows:
Labelling AI-Generated Content
According to Article 50, providers of AI systems that generate audio, image, video or text content must ensure that it is marked as such in a machine-readable format. This rule does not apply when the AI system merely edits the user's input without substantially changing it. The provision will help recognise whether a work was created by a human or by AI.
How will this obligation be implemented? Most likely through watermarking techniques. However, watermarking AI-generated content is problematic, especially for text, since a watermark is easy to remove or work around, for instance by paraphrasing the text so it no longer resembles ChatGPT's syntax and lexicon. This is now a challenge for AI providers, those who develop an AI system and place it on the market: the future will show what techniques emerge to comply with the AI Act.
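To see why text watermarks are so fragile, consider the deliberately naive sketch below (not a technique named in the Act, and not how production systems work): it appends an invisible marker built from zero-width characters, and a one-line filter suffices to strip it. The marker sequence and function names are illustrative inventions.

```python
# Naive illustration of a machine-readable marker for AI-generated text,
# and how trivially such a marker can be removed.
ZW_MARK = "\u200b\u200c\u200b"  # hypothetical marker: zero-width characters


def mark(text: str) -> str:
    """Append an invisible, machine-readable marker to generated text."""
    return text + ZW_MARK


def is_marked(text: str) -> bool:
    """Detect the marker without inspecting the visible content."""
    return text.endswith(ZW_MARK)


def strip_marker(text: str) -> str:
    """A one-line 'attack': dropping zero-width characters defeats the mark."""
    return "".join(ch for ch in text if ch not in "\u200b\u200c\u200d")


sample = mark("This paragraph was produced by a language model.")
print(is_marked(sample))                # True
print(is_marked(strip_marker(sample)))  # False: the marker is gone
```

Robust schemes instead bias the model's word choices statistically or attach signed provenance metadata, but the same cat-and-mouse dynamic (paraphrasing, re-encoding) applies.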
The Act also imposes obligations on deployers (essentially users, acting in a professional capacity) when they produce deep fakes with the help of AI. Deep fakes are content that mimics real people, objects and events but is, in fact, not real. In these cases, deployers must disclose that the content has been artificially generated or manipulated.
If the deep fake is part of an evidently artistic, creative, satirical, fictional or analogous work or programme, then the ‘disclosure of the generated or manipulated content must be done in an appropriate manner that does not impede the display or enjoyment of the work.’
Earlier drafts excluded ‘creative’ deep fakes from the transparency obligations. The European authors’ and performers’ organisations advocated for the labelling of all deep fakes and even obtaining consent from the person depicted or otherwise concerned. The nearly final text has found some middle ground between the first version and the requirements of the community.
Copyright Protection and Data Sources
Artists have raised concerns about the unauthorised use of their work by AI developers, often without acknowledgement or compensation. In principle, the AI Act does not prohibit AI developers from using datasets, including creative works, to train their models. It refers to the Copyright in the Digital Single Market Directive, Article 4(3), which allows for the use of works for text and data mining.
However, Article 53 of the AI Act also reminds AI providers that, according to this Directive, rightsholders can ‘expressly reserve the use of works’. If they do so via ‘machine-readable means in cases where content is made publicly available online’ (in practice, this means updating your website’s terms and conditions), AI providers must not use this data. If they wish to conduct text and data mining on such works, they must secure permission from the rightsholders, and negotiate the price.
If you are a photographer with a beautiful portfolio posted online and don’t want it used to train AI systems, you should explicitly state on your website, in a machine-readable form, that you forbid the use of your works for text and data mining.
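The Act does not prescribe a single opt-out format. Two machine-readable mechanisms seen in practice are the draft W3C TDM Reservation Protocol (a `tdm-reservation` signal in a page's HTML or HTTP headers) and crawler rules in `robots.txt`; the snippet below is an illustrative sketch of both, not a format required by the Act, and the crawler names are merely well-known examples.

```
<!-- In each HTML page's <head>: TDM Reservation Protocol (W3C draft) -->
<meta name="tdm-reservation" content="1">

# In robots.txt at the site root: refuse known AI-training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Whether a given AI provider honours these signals is a separate question; the legal reservation stands regardless, but clear machine-readable statements make compliance checkable.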
Another copyright provision requires AI providers to publicly disclose a ‘sufficiently detailed’ summary of the data used to train general-purpose AI models. The AI Office will set a standardised format for such data summaries. But if you think you’ll see the titles of all your poems in an AI system’s technical documentation, don’t be too quick to celebrate. The descriptions will most likely be high-level, outlining datasets rather than listing individual works. The lawmakers’ intention was to facilitate the verification of data used for AI training and ensure compliance with EU copyright rules.
Next Steps
The AI Act outlines the main rules for AI providers and deployers. More practical information on the implementation of the rules will follow from the European Commission’s new AI Office. The Act is a Regulation applicable in all member states, but it will take approximately two years to become fully applicable (except for some provisions on prohibited practices, codes of practice and general-purpose AI rules, which will apply earlier).
The EU is often criticised for overregulating, which can stifle creativity and innovation. This time, the lawmakers have opted for a comparatively balanced approach. Over the next two years, AI will develop even faster, and new cases and practices will puzzle artists. Meanwhile, it would be good to learn not just how to restrict, label or avoid AI but also how to use it to enhance the arts’ visibility, accessibility, heritage protection, and co-creation. That’s one of the tasks of the Digital Action Group CAE is launching—we’ll keep you posted about our findings.
Culture Action Europe points out in its original article that the text is purely informational and should not be regarded as legal advice.