The transposition of the Copyright Directive into Italian law was announced on 7 June 2021, establishing its entry into force in the country.
On the one hand, authors, publishers and trade associations have fought hard for years to regulate the use of online content and to strike a new balance between the interests at stake in the Digital Single Market; on the other, big-tech companies, led by giants such as Google and Facebook, have pushed to water down regulatory proposals they consider disruptive to a now-consolidated business model.
Article 15 and Article 17, cited several times because they are among the main protagonists of the Directive, are the two cornerstones that mark the change. The first introduces a related right granted to press publishers over the online use of their news publications, while the second introduces the obligation for large content-sharing platforms to make the “maximum efforts” to obtain licences from rights holders.
The European Commission has therefore issued the guidelines provided for by Article 17(10) of the Copyright Directive on the rules applicable to internet service providers, with the aim of promoting a correct and uniform transposition of Article 17 by the Member States and of supporting the market operators who will have to comply with the national transpositions of that provision. These providers are now defined as online content-sharing service providers and are regarded as parties who themselves carry out an act of communication to the public. It is also specified that, to fall within the definition, the information society service must organise and promote the content uploaded by its users for a profit-making purpose. Recital 63 specifies that “the profit deriving from the uploaded content could be obtained, directly or indirectly, by organising and promoting such content in order to attract a wider audience, including by classifying it and resorting to targeted promotion. The profit-making purpose should not be presumed on the basis of the mere fact that the service is an economic operator or by reason of its legal form. The profit motive should be linked to the profits derived from organising and promoting user-uploaded content in order to attract a wider audience, for example, but not exclusively, by placing advertisements next to user-uploaded content.”
This is a very important distinction that still needs to be clarified in detail. What is welcome, however, is the willingness to place greater responsibility on all those services that base their business model on the dissemination of content; such gain can also derive from indirect channels, namely the exploitation of data obtained by profiling the traffic generated by users.
In addition, among the measures to be adopted to limit the uploading of unauthorised content, the guidelines indicate the adoption of suitable content-recognition tools. This applies after the provider has made the “maximum efforts”, which must be documented, to obtain a licence from the rights holder; such a licence may also cover non-commercial exploitation of the work, i.e. exploitation that does not generate significant revenues. Otherwise, the online content-sharing service provider will be held liable for the illegal activity carried out by its users through the functionalities of the platform itself.
The guidelines specify that the conduct of service providers must be assessed concretely, case by case, bearing in mind that “maximum efforts must be made only to ensure the unavailability of specific works and other materials for which the rightholders have provided the service providers with the ‘relevant and necessary information’. Recital 66 specifies that if rightholders do not provide such information meeting the requirements of Article 17(4), online content-sharing service providers are not liable for unauthorised uploads”.
Against this background, discussion has returned to the so-called “upload filters”, that is, the technical tools already used (even before the Directive entered into force) by service providers to automatically check material uploaded to their sites.
The first platforms offering the possibility of monetising the exploitation of audiovisual programmes on the internet were created several years ago. Foremost among them was YouTube, which made it possible to manage the exploitation of audiovisual content uploaded by users (UGC, User Generated Content – ed.) through the system known as ContentID. It is precisely this kind of technology that the European Commission guidelines now indicate as a modus operandi.
Types of identification systems
Simplifying, file identification systems can be reduced to three types:
– Metadata: each file (audio, video, …) carries a set of information describing its content. This information is called metadata: for example, the artist’s name, the song title, the duration, the rights holder, the description, the category, etc. Together, these elements can contribute to identifying a match.
– Hash: a hash is an alphanumeric string that identifies the content of a file in its entirety; any change to the file, however small, produces a different hash. It is often referred to as a hash value or the message digest (or simply digest).
– Fingerprint: the fingerprint “works” on the characteristics of the content (for example, in the case of a song, the musical notes, their repetitions, the frequency, etc.). This type of recognition is the most accurate and is based on an optimised implementation of the Fourier transform (editor’s note – the Fast Fourier Transform), which does no more than convert a function of time (a signal) into the frequency domain, thus allowing the composition of the signal to be studied in terms of frequency, amplitude and phase. The result of applying the algorithm forms the fingerprint. Comparing the similarity points of two fingerprints allows audio matches to be identified.
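To illustrate the hash-based approach described above, the following sketch (a minimal example, not any platform’s actual pipeline) shows why hashes detect only exact copies: identical byte streams produce identical digests, while changing even a single byte produces a completely different one.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return the SHA-256 digest of a byte stream (its 'hash value')."""
    return hashlib.sha256(data).hexdigest()

original = b"some uploaded audio content"
exact_copy = b"some uploaded audio content"
re_encoded = b"some uploaded audio content!"  # one byte added

# Identical content yields an identical digest...
print(file_digest(original) == file_digest(exact_copy))  # True
# ...while any modification, however small, yields a different digest.
print(file_digest(original) == file_digest(re_encoded))  # False
```

This is also why hashing alone is too brittle for platforms: a re-encoded or trimmed copy of a work no longer matches, which is where fingerprinting comes in.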
To generalise, the process that enables content identification on video/file-hosting platforms is commonly called fingerprinting, which – in short – assigns to an audio/video signal a fingerprint outlining its characteristics, generated by applying a mathematical function.
This process can be run when the content is uploaded to the platform (upload procedure), but it can also be invoked later for a posteriori verification.
The information relating to the identification of the content is saved in the platform’s database (identified as a reference file) and then used for comparison (matching phase) with any digital assets present on the system.
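A minimal sketch of the matching phase just described, under loudly stated assumptions: fingerprints are modelled here as short bit sequences, the reference database as a plain dictionary, and the similarity measure as the fraction of agreeing bits. All names and thresholds are invented for illustration and do not reflect any actual platform API.

```python
# Hypothetical reference files: fingerprints registered by rights holders.
REFERENCE_DB = {
    "asset-001": (1, 0, 1, 1, 0, 0, 1, 0),
    "asset-002": (0, 1, 0, 0, 1, 1, 0, 1),
}

def similarity(fp_a, fp_b):
    """Fraction of positions where the two fingerprints agree."""
    matches = sum(1 for a, b in zip(fp_a, fp_b) if a == b)
    return matches / len(fp_a)

def best_match(uploaded_fp, threshold=0.75):
    """Return the best-matching reference asset above the threshold, if any."""
    best_id, best_score = None, 0.0
    for asset_id, ref_fp in REFERENCE_DB.items():
        score = similarity(uploaded_fp, ref_fp)
        if score > best_score:
            best_id, best_score = asset_id, score
    return best_id if best_score >= threshold else None

print(best_match((1, 0, 1, 1, 0, 0, 1, 1)))  # near-copy of asset-001 -> "asset-001"
print(best_match((1, 1, 1, 1, 1, 1, 1, 1)))  # matches nothing well -> None
```

The similarity threshold captures the key design trade-off: set too high, it misses altered copies; set too low, it flags unrelated content.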
Video-sharing platforms usually use technology that separates the audio track from the video track so that the most appropriate fingerprinting method can be applied to each. For the audio track, for example, the spectrogram is analysed over fractions of time, while for the video track numerical processing algorithms are used that take into account information such as length, timeframe and encoding, as well as the characteristics of certain frames, such as the percentage of colour and its evolution over time, scene changes, and their correlation with the audio track.
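To make the audio side concrete, here is a toy spectral fingerprint in the spirit of the spectrogram analysis described above. It is a deliberately simplified sketch: real systems use optimised FFTs and far more robust features, whereas this one cuts the signal into frames, computes a naive discrete Fourier transform on each, and keeps only the dominant frequency bin per frame. Note that the result is invariant to volume changes, unlike a hash of the raw samples.

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude of each DFT bin of a frame (naive O(n^2) transform)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep only the non-redundant bins

def spectral_fingerprint(signal, frame_size=16):
    """Dominant frequency bin of each frame: a crude spectral fingerprint."""
    fingerprint = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        mags = dft_magnitudes(signal[start:start + frame_size])
        fingerprint.append(max(range(len(mags)), key=mags.__getitem__))
    return tuple(fingerprint)

# A pure tone completing 3 cycles per 16-sample frame: bin 3 dominates.
tone = [math.sin(2 * math.pi * 3 * t / 16) for t in range(64)]
louder = [2.0 * s for s in tone]  # same content, different volume

print(spectral_fingerprint(tone))  # (3, 3, 3, 3)
print(spectral_fingerprint(tone) == spectral_fingerprint(louder))  # True
```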
Depending on the rules established for the asset, the content can then be managed accordingly (blocked, tracked or monetised).
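The rule-based handling just mentioned can be sketched as a simple policy lookup. The asset identifiers and policy names below are invented for illustration; real platforms support much richer, territory- and usage-dependent rules.

```python
# Hypothetical per-asset policies, as a rights holder might configure them.
ASSET_POLICIES = {
    "asset-001": "monetise",  # keep the upload online, route ad revenue to the rights holder
    "asset-002": "block",     # make the upload unavailable
    "asset-003": "track",     # leave it up but collect viewing statistics
}

def handle_upload(matched_asset_id):
    """Apply the rights holder's rule for a matched asset; default to 'allow'."""
    return ASSET_POLICIES.get(matched_asset_id, "allow")

print(handle_upload("asset-002"))  # prints "block"
print(handle_upload(None))         # no match found: prints "allow"
```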
The effectiveness of fingerprint-based matching has now reached high standards, allowing reliable recognition even with reduced or partial content (see the table below comparing the reliability and costs of the different types of automatic content recognition).
In short, this is the situation today, but it is enough to surf the net to notice the considerable number of public and private parties who consider the Directive and, in particular, its guidelines to be in breach of fundamental rights.
On 15 July, in the case brought by Poland seeking the annulment of the provisions of the Directive it considered incompatible with fundamental rights (specifically, points (b) and (c) of Article 17(4)), on the ground that they infringe the right to freedom of expression and information established and protected by the EU itself, the Advocate General of the Court of Justice of the EU delivered an Opinion proposing that Poland’s action be dismissed.
The Court’s judgment, with its full grounds, is expected later this year, and other European countries are already considering challenging the Directive and its guidelines.
We are therefore only at the beginning of what promises to be not only a legal battle but also a “philosophical” one over the balancing of interests and freedoms on the net.
Milan, 3 August 2021
Niccolò Lasorsa Borgomaneri