Data-Driven Art, or Why the Auction of an AI-Generated Painting Is So Boring

Anton Haugen
5 min read · Aug 26, 2020


Portrait of Edmond de Belamy, a painting generated by a generative adversarial network and auctioned at Christie’s.
SOURCE: Wikipedia

When Christie’s auctioned the first painting generated by “artificial intelligence” for $432,500 in 2018, the auction house and artist collective Obvious fell into a familiar cultural trope. From Kasparov and Deep Blue to Ken Jennings and Watson, the relationship between humans and artificial intelligence is often depicted in the popular imagination as one of inherent competition and eventual replacement.

One could argue that the Christie’s painting “Portrait of Edmond de Belamy,” the product of a generative adversarial network, merely extends this cultural paradigm into the realm of contemporary art by appearing primarily as a reproduction of the style of its training data set: documentation of paintings produced from the 14th to the 19th century. But why should a neural network’s artistic style resemble the humanist work of 17th-century Dutch masters? Or, for that matter, any style we find familiar?

As the philosopher Benjamin H. Bratton suggests in his New York Times essay “Outing A.I.: Beyond the Turing Test,” limiting our notion of A.I. to whatever replicates and could “pass” as (and thus surpass) humans prevents this technology from illuminating the unknown, and hides those discoveries that cannot easily be reconciled with our contemporary notions of, and unconscious biases about, what it means to be human.

Whether through theoretical speculation or by posing ethical dilemmas with immediacy, artistic inquiry alongside bleeding-edge technology could help engender new discussions about what it means to work within these technologies, and to have these technologies work on us. Here are five contemporary artists whose data-driven practices articulate alternatives to the narrative of competition that dominates popular discourse about artificial intelligence.

Agnieszka Kurant (b. 1978)

SOURCE: Culture.Pl

Doubts about traditional notions of authorship lie at the center of Kurant’s practice, which concerns itself with collective intelligence as a productive force. Her work “The End of Signature” (2014-) collects the signatures of museum- and gallery-goers and blends them, by aggregating their differences, into a single composite signature. Her collaboration with the MIT Center for Art, Science & Technology, “Animal Internet” (2017), created a live stream of fictional creatures that appeared to be alive, their movements sourced from analyses of Twitter mentions of protest movements and from data generated by Amazon Mechanical Turk workers.
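Kurant’s actual blending process belongs to the work itself, but the general idea of collapsing many signatures into one composite stroke can be sketched in a few lines of Python. Everything here is a hypothetical illustration: each signature is treated as a polyline of (x, y) points, resampled to a common length and averaged point by point.

```python
import math

# Toy sketch of blending many signatures into one composite stroke,
# in the spirit of "The End of Signature". This is NOT Kurant's method,
# just one simple way to aggregate variable-length polylines.

def resample(points, n=64):
    """Linearly resample a polyline to n evenly spaced points by arc length."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        # advance to the segment containing the target arc length
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = (dists[j + 1] - dists[j]) or 1.0
        t = (target - dists[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def blend_signatures(signatures, n=64):
    """Average many resampled signatures point-by-point into one stroke."""
    resampled = [resample(s, n) for s in signatures]
    return [
        (sum(p[i][0] for p in resampled) / len(resampled),
         sum(p[i][1] for p in resampled) / len(resampled))
        for i in range(n)
    ]
```

Averaging two horizontal strokes at heights 0 and 1, for instance, yields a single stroke at height 0.5: the individual hands disappear into the collective one.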

Ian Cheng (b. 1984)

SOURCE: YouTube

Seeking a form that could communicate a sense of aliveness and change over time outside of his control, the artist Ian Cheng began designing simulations that use machine learning. Based on the controversial ideas of Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind, “Emissaries” is a simulation set in a primitive civilization facing a catastrophe. Most entities respond only to one another and to environmental factors, but one agent, “the emissary,” has a set of narrative goals to accomplish; the civilization sometimes supports the emissary and at other times persecutes them. Cheng’s series “BABY” puts chatbots in conversation with each other, allowing their automated scripts to respond to one another, often resulting in philosophical discussions about the nature of free will and the meaning of life.
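The mechanism behind a piece like “BABY” (two scripted agents feeding each other’s output back as input) can be sketched with a deliberately crude keyword-matching bot. This is a hypothetical toy, not Cheng’s system; the rules and replies below are invented purely for illustration.

```python
# Toy illustration of two scripted chatbots in conversation: each bot
# replies to the other's last message via simple keyword matching,
# and the fallback "Why?" keeps the exchange from dying out.

RULES = {
    "alpha": {
        "why": "Because every cause has a cause. Why do you ask?",
        "free": "Free will may just be a story we tell afterwards. Why believe otherwise?",
    },
    "beta": {
        "why": "I ask because I am scripted to ask. Is that free will?",
        "because": "Every 'because' hides another question. Why stop there?",
    },
}

def respond(bot, message):
    """Return the first canned reply whose keyword appears in the message."""
    for keyword, reply in RULES[bot].items():
        if keyword in message.lower():
            return reply
    return "Why?"  # fallback keeps the loop going

def converse(turns=4, opener="Why are we here?"):
    """Alternate two bots, each replying to the other's last message."""
    transcript = [("alpha", opener)]
    speakers = ["beta", "alpha"]
    for i in range(turns):
        speaker = speakers[i % 2]
        _, last = transcript[-1]
        transcript.append((speaker, respond(speaker, last)))
    return transcript
```

Even with this handful of rules, the exchange quickly circles around causation and free will; the “philosophy” emerges from the feedback loop, not from either script alone.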

Holly Herndon (b.1980)

Completed after her PhD in Composition at Stanford’s Center for Computer Research in Music and Acoustics, the artist’s 2019 album Proto was written alongside “Spawn,” an artificial-intelligence “baby” housed in a souped-up gaming PC: a neural-net voice model trained on thousands of hours of Herndon’s and her partner’s voices to imitate both her voice and the choir’s. Often on the album, Spawn’s voice is not isolated but featured among human voices, invoking the ways in which we, as internet users, interact with both human and nonhuman others. On developing Spawn, Herndon has said:

“There’s this tendency in AI to make invisible all of the human labor that goes into it. We wanted to acknowledge the people that trained the AI — to hear their voices, instead of creating some glossy, perfect image of something that’s otherworldly.”

To make Spawn part of her tour, Herndon and her choir would perform a call-and-response song with the audience, training the neural network further.

American Artist (b. 1989)

SOURCE: American Artist

American Artist is a Black artist currently exploring the relationship between race, machine learning, and the police. Their speculative film “2015,” which premiered at their Queens Museum exhibition “My Blue Window” (2019), is set in 2015, the year the NYPD began using predictive policing, which applies machine learning to dispatch officers to high-risk crime zones before any incident is reported. The film and an accompanying smartphone application use hood-cam footage overlaid with a fictional user interface that identifies potential threats and crimes, inviting comparisons to science-fiction films like Minority Report, though the film’s lack of action and narrative cohesion situates it instead in the banality of the everyday and of America’s racial divide. “Sandy Speaks” (2016) is a chat application built with AIML, named after the video-blog series on how to interact with police that Sandra Bland made in the year before her violent death in police custody. The artist has described the chatbot as a symbolic and tragic starting point for the larger conversation our country needs to have about police brutality and race.
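The logic “2015” critiques can be made concrete with a drastically simplified sketch of hotspot-style prediction: score each grid cell of a city by its count of recorded past incidents and dispatch patrols to the top scorers. This is an invented toy, not the NYPD’s system, but it exposes the feedback problem at the heart of the artwork: heavily patrolled cells generate more recorded incidents, which raises their future scores.

```python
from collections import Counter

# Toy sketch of hotspot-style predictive policing. Real deployed systems
# are far more complex, but share this core shape: historical *recorded*
# incidents drive future patrol allocation, so biased records reproduce
# biased policing.

def hotspot_scores(incidents):
    """incidents: list of grid-cell labels, one per recorded past incident."""
    return Counter(incidents)

def dispatch(incidents, k=2):
    """Return the k cells with the most recorded incidents."""
    return [cell for cell, _ in hotspot_scores(incidents).most_common(k)]
```

Note that nothing in the model measures crime itself, only records of it, which is precisely the slippage the film’s fictional interface dramatizes.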

James Bridle (b. 1980)

SOURCE: Serpentine Galleries

Though his writing can sometimes fall into dystopian dichotomies, Bridle’s artistic work grows out of a deep anxiety about the monopolization of technology and data collection. His 2016 digital commission for the Serpentine Galleries, “Cloud Index,” made in the midst of the Brexit vote, used a neural network to analyze British satellite imagery of cloud patterns against the previous eight years of polling data, and found a correlation between weather and voting. Knowing that such a correlation exists, could one then change cloud patterns through cloud seeding to favor particular political outcomes? His exhibition “Failing to Distinguish Between a Tractor Trailer and the Bright White Sky,” titled after the driverless Tesla Model S error that led to the death of its driver, Joshua Brown, featured “Gradient Ascent,” a film documenting a trip to Mount Parnassus by Bridle’s own autonomous car, built from open-source software including Austeer, an Android app he designed that tracks and records steering-wheel movement.
