How could A.I. restitch the Unicorn Tapestries?

Anton Haugen
5 min read · Mar 16, 2021
“The Unicorn in Captivity”

Last fall, after my first trip to the Met Cloisters, I started learning more about the Unicorn Tapestries, a series of seven tapestries over 500 years old, each comprising tens of thousands of threads. The seven tapestries tell the seemingly simple story of a unicorn's capture, captivity, and murder, and have entranced visitors for generations. Though a friend had mentioned that they were an allegory of the Stations of the Cross, much of their imagery can be read to contradict this: the unidentified botany (perhaps extinct plant varieties, perhaps products of the artists' imagination), the companion animals, and the expressions on the Unicorn's face. For me, part of the works' appeal lies in the question of how we enter their world, whether through the inhuman or the human subjects. Perhaps this is a painterly question, one outside the flat, decorative, and somewhat non-individualistic aesthetic ideals behind medieval tapestry, a subject I'm admittedly not well versed in, though I fondly recall the tapestries at Hearst Castle and the Morgan Library, as well as those described in footnotes to "The Merry Wives of Windsor" as hanging above the fireplaces of drinking halls in Shakespeare's day. Before Rockefeller, the first recorded and most famous owner of the tapestries was one of my favorite writers, François de La Rochefoucauld, the nobleman and writer of maxims, which, like the tapestries, allure through their playful yet constant approach to systemic coherence while subverting total cohesion through their stylistic flourishes.

Something I keep returning to whenever I think about the Unicorn Tapestries now, as I learn more about programming, is the work of Gregory and David Chudnovsky, better known as the Chudnovsky brothers, who built a supercomputer in their Upper West Side apartment out of mail-order parts to discover digits of pi and to see whether its digits held any discernible pattern. I was introduced to the brothers through a New Yorker profile, the kind of pre-Trump-era piece that celebrates figures in STEM as much for their eccentricities and curious personas as for their discoveries, by way of a curious problem concerning the Unicorn Tapestries. The writer described them as mathematicians kept somewhat at the margins of academia, partly for their use of computers in their number-theory work, something a contemporary audience may have a hard time grasping, and partly because the brothers insisted that they were one mathematician who happened to inhabit two separate bodies, meaning a single academic post would have to serve them both.

As part of a restoration effort during a renovation of the Met Cloisters, the Metropolitan's documentation department photographed every square inch of each of the Unicorn Tapestries. But once these high-resolution photographs were complete, the department lacked the processing power to reassemble the images from the high-resolution tiles. Through mutual connections, the Chudnovsky brothers and their supercomputer were enlisted to help solve the problem. When the brothers brought the shopping bags full of discs to their apartment (the New Yorker writer takes care to describe how one bag was forgotten, then retrieved from the produce section of their neighborhood supermarket), they aimed to reconstruct the tapestries in software purely on a numeric level, without any visual clues, using only the numeric representations of the image. Their first task: reassembling the 30 digital tiles of "The Unicorn in Captivity."
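To get a feel for what "purely numeric" alignment might mean, here is a minimal sketch of my own (not the brothers' actual software, which isn't public): score candidate offsets between two neighboring tiles by the sum of squared differences over their overlapping strips, using nothing but raw pixel values.

```python
import numpy as np

def best_vertical_offset(left_tile, right_tile, overlap=64, max_shift=20):
    """Find the vertical shift that best aligns right_tile against
    left_tile, scoring candidates by sum of squared differences (SSD)
    over the strips where the two tiles overlap. Tiles are 2-D
    grayscale arrays; the overlap width is assumed known."""
    left_strip = left_tile[:, -overlap:].astype(float)   # left tile's right edge
    right_strip = right_tile[:, :overlap].astype(float)  # right tile's left edge
    best_shift, best_score = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        # np.roll wraps at the edges, which is fine for a sketch.
        score = np.sum((left_strip - np.roll(right_strip, shift, axis=0)) ** 2)
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift
```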

Only, when they took a first crack at it, they discovered the tiles did not completely line up! In one tile, for example, the two halves of a flower might align while the stems in the ornate border did not. The brothers were convinced the tapestry had been moved or the camera jolted during documentation, which the Met denied, although at one point a suddenly opened door had let in light that changed some of the coloration. Their math did reveal movement in the tapestry. They created a vector displacement map composed of 15,000 arrows to track how the tapestry was moving, and found that the movement had nothing to do with being moved or jolted. In fact, laid on the floor of the Met's wet lab, the tapestry had slowly begun to relax, undulating in the manner of a still pond. To reassemble the 240 million pixels, they created warping transformations, mathematics used to find correspondences among similar features, perhaps familiar to those who have worked with handwriting recognition or speech recognition. After four months of devising the software, the supercomputer ran for 30 straight hours, performing 7.7 quadrillion calculations, to recreate "The Unicorn in Captivity" as a digital file. Perhaps because of its immediacy, the digital image they reconstructed, like the historical versions of the Mechanical Turk, hides the very human work required to create it. Though maybe that's the real magic of it.
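Their displacement map is easy to imagine in code. Below is a hedged sketch, my own reconstruction rather than their method: sample small patches from a reference tile, search for each patch in the shifted image with normalized cross-correlation, and record one arrow per patch.

```python
import cv2
import numpy as np

def displacement_field(ref, moved, patch=32, stride=64, search=16):
    """Estimate a sparse field of (x, y, dx, dy) arrows describing how
    `moved` has drifted relative to `ref`, in the spirit of the
    brothers' 15,000-arrow vector displacement map. Both images are
    single-channel uint8 or float32 arrays of the same shape."""
    arrows = []
    h, w = ref.shape
    for y in range(search, h - patch - search, stride):
        for x in range(search, w - patch - search, stride):
            template = ref[y:y + patch, x:x + patch]
            # Search a slightly larger window around the same location.
            window = moved[y - search:y + patch + search,
                           x - search:x + patch + search]
            scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(scores)  # best-match position
            arrows.append((x, y, bx - search, by - search))
    return arrows
```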

The work of the Chudnovsky brothers is an incredible story of how computation can help solve practical problems at an arts institution, but revisiting their story recently, I started to wonder how their work could be completed today with deep learning, and precisely what kind of deep learning architecture one would apply.

The principles behind their work resemble those employed to create deepfake videos: videos that use generative adversarial networks to resurrect Robert Kardashian for Kim Kardashian's 40th birthday, or to make Barack Obama deliver a speech he never gave; videos, generally, in which people say or do things they never in fact said or did.

In addition, the work of the Chudnovsky brothers shares patterns with classic computer vision problems in which images must be warped to account for a camera's movement through a space.
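The standard recipe for that kind of registration, nothing from the tapestry project itself, but it shows the family resemblance, is to match keypoints between two images, fit a homography, and warp one image into the other's frame. A sketch with OpenCV:

```python
import cv2
import numpy as np

def align(img, target):
    """Warp `img` into `target`'s frame using a homography estimated
    from matched ORB keypoints: the classic recipe for registering
    images taken from slightly different camera positions."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(target, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards mismatched keypoints before fitting.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img, H, target.shape[1::-1])
```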

I investigated further by learning about the 2016 project WarpNet, a deep learning architecture that aligns an object in one image with a different object in another. WarpNet takes two images as input and produces a deformed 10 x 10 lattice that defines the warp between them. In the image below, the fourth column shows the transformation, while the third column reflects the output of the network.

From “WarpNet: Weakly Supervised Matching for Single-view Reconstruction”
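To make the idea concrete, here is a toy, WarpNet-flavored module in PyTorch, my own simplification rather than the authors' code: a small encoder looks at the two images together and regresses per-point offsets for a 10 x 10 lattice, which then drives the warp. (WarpNet proper fits a thin-plate spline through its lattice; this sketch just upsamples the lattice offsets bilinearly.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatticeWarp(nn.Module):
    """Toy WarpNet-flavored module: encode source and target together,
    regress (dx, dy) offsets for a 10x10 control lattice, and warp the
    source image with the resulting deformation field."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(10),      # collapse to a 10x10 lattice
            nn.Conv2d(64, 2, 1),           # (dx, dy) offset per lattice point
        )

    def forward(self, source, target):
        offsets = self.encoder(torch.cat([source, target], dim=1))
        n, _, h, w = source.shape
        # Identity sampling grid over [-1, 1], shape (n, h, w, 2).
        theta = torch.eye(2, 3).to(source).unsqueeze(0).repeat(n, 1, 1)
        base = F.affine_grid(theta, source.shape, align_corners=False)
        # Bilinearly upsample the lattice offsets to a dense flow field.
        flow = F.interpolate(offsets, size=(h, w), mode="bilinear",
                             align_corners=False).permute(0, 2, 3, 1)
        return F.grid_sample(source, base + flow, align_corners=False)
```

Given two 3-channel images, `LatticeWarp()(source, target)` returns the source resampled toward the target. Training it is the hard part; WarpNet does so weakly supervised, using synthetic warps of the same image as its correspondence ground truth.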

Although I suspect the output lattice would need to be denser than 10 x 10 to capture the warp transformations in the Chudnovsky brothers' vector map, one can easily imagine providing a low-resolution tile of an image of the complete tapestry as the target, yielding the transformations to be performed on the corresponding high-resolution tile. From there, the high-resolution image could be stitched together like a puzzle, as the Chudnovskys had originally anticipated; a sketch of that final step follows.
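That last step is the easy part. Assuming each high-resolution tile has already been warped into its target's frame, by a network like the sketch above or by any other means, the mosaic is just careful pasting; this hypothetical helper averages the overlaps:

```python
import numpy as np

def stitch(tiles, positions, canvas_shape):
    """Paste already-warped tiles into one mosaic. `tiles` is a list of
    2-D grayscale arrays and `positions` gives each tile's (row, col)
    origin on the canvas; overlapping pixels are simply averaged."""
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    weight = np.zeros(canvas_shape, dtype=np.float64)
    for tile, (r, c) in zip(tiles, positions):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1
    return canvas / np.maximum(weight, 1)   # avoid dividing by zero
```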
