Prior Art Automaton Installation, 2019

Prior Art Automaton Artwork

Prior Art Automaton was initially exhibited as drawings in the Dutch Pavilion of the Venice Biennale 2018 and was produced by ICIA. For the Rob Law exhibition, Studio Alight has turned this conceptual idea into a large-scale sculptural installation. The work questions fundamental principles around the corporate use of intellectual property.

In most systems of patent law, prior art is all information that has been made available to the public in any form before a given date and that might be relevant to a patent’s claim of originality. An automaton is a self-operating machine designed to follow a predetermined sequence of operations or to respond to predetermined instructions. For Prior Art Automaton, Studio Alight has used artificial intelligence to design a machine that prints, and thus creates, novelty (prior art) at a pace unachievable by humans. This prior art protects smaller companies and individuals because they no longer need to compete with large global corporations within the patent system, a competition that would otherwise be experienced as that of David and Goliath.

Prior Art Automaton was produced by ICIA for the Rob Law exhibition. The piece was critically reviewed by Sara Arvidsson of the Göteborg Post on 31 Aug 2019 (link).



As stated above, establishing prior art is an important first step in the patenting process. More concretely, Prior Art Automaton is a sculptural installation consisting of an automaton that prints novelty: the printed output is created by a recurrent neural network trained on a corpus of patent texts to write new, novel texts. The installation also includes a directional speaker that reads the novelty aloud as you stand at the stream of prior art. The voice is produced by another neural network trained with deep-learning techniques, such as the voices of Amazon Polly.
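To make the pipeline concrete, the sketch below shows roughly how such a system could be wired together. It is not the studio’s actual code: the CharRNN class, the toy placeholder corpus, the hyperparameters and the Polly voice “Joanna” are illustrative assumptions. What it does take from the description above is the shape of the system: a character-level recurrent network (in the spirit of the Karpathy reference listed below) trained on patent text, sampled one character at a time to generate new “prior art”, which is then voiced with Amazon Polly.

```python
# Minimal sketch, not the studio's actual code: CharRNN, the toy corpus and the
# Polly voice "Joanna" are illustrative assumptions.
import boto3
import torch
import torch.nn as nn

# Placeholder corpus; the installation used a large corpus of real patent texts.
corpus = (
    "A method and apparatus for transmitting data over a network, "
    "wherein the apparatus comprises a processor configured to encode said data. "
) * 20

chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharRNN(nn.Module):
    """Character-level recurrent network: embed -> LSTM -> next-character logits."""
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len = 64

# Training: at every position, predict the next character of the corpus.
for step in range(300):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # (1, seq_len)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # same window, shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sampling: feed the model its own output, one character at a time.
model.eval()
out, state = [stoi["A"]], None
with torch.no_grad():
    for _ in range(300):
        logits, state = model(torch.tensor([[out[-1]]]), state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        out.append(torch.multinomial(probs, 1).item())
novelty = "".join(itos[i] for i in out)
print(novelty)

# Read the "novelty" aloud with Amazon Polly (assumes AWS credentials are configured).
polly = boto3.client("polly")
speech = polly.synthesize_speech(Text=novelty[:3000], OutputFormat="mp3", VoiceId="Joanna")
with open("novelty.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```

In the installation the generated stream feeds a printer continuously; in this sketch the text is simply printed to the console and the speech is written to an MP3 file.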
Because it was trained on the patent system’s own output, the automaton has learned the patterns of how patent language is written by the system’s practitioners, and with this training it produces new language, faster than any human ever could. This notion is expanded in the accompanying Rob Law exhibition text written by ICIA Konsthall:

The automaton takes away the possibility to have patents that can be used in infringement. The availability of this prior art protects smaller companies or individuals because they no longer need to compete with large global corporations in the system, a competition that would normally be experienced as that of David and Goliath. This is the end of patents as we know it.

Now, that last statement is rather bold. Even so, the work raises a concern that is bubbling up in a world where things are increasingly made by machines, machines with “minds” of some sort. We humans must reflect on how we train those machines and on the ethics of machines doing work that has historically been inherently human. Questions arise such as: Who owns these machines? Who interprets them? What kind of ethics committee is needed to regulate the fairness of their inventions? Who is now the higher, often more educated, class?

The installation is both an archive of the texts the automaton has written and an experience of an industrialised world where novelty is invented by machines and the human is displaced to the role of paper refiller (and, in my utopia, possibly a reader of philosophy between those moments of refilling).



Amazon Web Services. (2019). Amazon Polly – Text to Speech in 47 Voices and 24 Languages | Amazon Web Services. [online] Available at: https://aws.amazon.com/blogs/aws/polly-text-to-speech-in-47-voices-and-24-languages/ [Accessed 5 Dec. 2019].

En.wikipedia.org. (2019). Automaton. [online] Available at: https://en.wikipedia.org/wiki/Automaton [Accessed 5 Dec. 2019].

En.wikipedia.org. (2019). Prior Art. [online] Available at: https://en.wikipedia.org/wiki/Prior_art [Accessed 5 Dec. 2019].

Karpathy.github.io. (2020). The Unreasonable Effectiveness of Recurrent Neural Networks. [online] Available at: http://karpathy.github.io/2015/05/21/rnn-effectiveness/ [Accessed 7 Feb. 2020].