Inkjet on Belgian linen over CNC-milled foam sculptures
2015, XPO Gallery, Paris, France
1. An exhibition that attempts a reconstruction of the Great Portal at Cluny, a medieval church in France that rivaled St. Peter’s in Rome before it was dynamited to bits. The portal looks like a puzzle – small fragments are reassembled in a sparse composition. The whole thing has been composed by capturing 3D models of the dispersed fragments housed in museums across the world. A video narrates the technical challenges and details. It’s produced by the Conservatoire national des arts et métiers.
2. A .zip file on a hard drive that contains a mesh (mesh.obj) and a texture (tex_0.jpg). The JPG turns out to be a fantastic puzzle of photographic fragments and shards. The mesh is a 3D model – a series of points joined together that define a form in space. The photorealistic model constituted on-screen is composed of these twin puzzle pieces – abstract, mathematically precise interconnected points in space, and a messy, exploded picture of photographic shards.
3. A URL that may or may not be a conspiracy theory site. It clamors about evidence that the Shroud of Turin is real – or fake. It’s full of laborious technical details – the depth of the imprint, the chemical nature of the marks, the linen, pollen samples, etc. The site’s author has created a piece of software to extract a 3D model from the image on the shroud. The shroud is a technical apparatus – a picture that encodes data points for an object.
4. A director of the Conservatoire national des arts et métiers who invents photogrammetry in 1849, a year or so after the invention of photography itself. It is a process that extracts three-dimensional data from photographs. The process takes a set of images and constructs a data set of points in space.
5. An architectural surveyor who is afraid of heights as the result of a near-death accident. He turns to photogrammetry. It allows him to map and measure from a distance, without touching. In photogrammetry the photos are a data set and not an end – not the final thing to look at. A single photograph does not produce any data; only the set of interrelated pictures does.
Surface Proxy is a show of picture surrogates: inkjet prints on linen wrapped around CNC-milled foam sculptures.
Valla begins with medieval French architectural and sculptural fragments that have made their way into the collections of museums in New York and Providence, where the artist lives and works. The objects at the various institutions are chosen because they bear visible scars of their transformation from fixed architectural ornaments in France to stand-alone sculptural works in New York and Providence. The resulting set of objects is structured around themes of ephemerality, permanence, reconstruction, preservation and restoration.
In order to reproduce the fragments for his collection, Valla uses a process known as photogrammetry: objects are photographed from multiple angles and a 3D model of the objects is produced by triangulating the multiple images.
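(To make the principle concrete: a minimal sketch of the triangulation step at the heart of photogrammetry software, not Valla’s actual pipeline. Given the same point located in two calibrated photographs, the direct linear transform recovers its position in space; the camera matrices and the point below are toy values.)

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: P1, P2 are 3x4 camera matrices; x1, x2 are 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                            # homogeneous coordinates -> 3D point

# Two toy cameras: identical orientation, the second shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])            # a point in space
p1, p2 = P1 @ X_true, P2 @ X_true
x1, x2 = p1[:2] / p1[2], p2[:2] / p2[2]            # its image in each photograph
print(triangulate(P1, P2, x1, x2))                 # ~[0.5, 0.2, 4.0]
```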
The origins of photogrammetry are intrinsically linked with the medieval content. The technique was invented by a French mathematician in 1849 and perfected in the latter half of the 19th century by Albrecht Meydenbauer, a Prussian architect tasked with surveying historical monuments on the verge of complete decay. A near-fatal fall from a gothic cathedral convinces Meydenbauer to employ the ‘magic scaffold’ of photogrammetry – to measure and model from a distance. Contemporary photogrammetry software has made this once specialized scientific process widely accessible.
This follows Valla’s previous explorations of what Harun Farocki terms ‘Operative Images’ – images that are part of an operation, characterized by their aesthetic unintentionality. In photogrammetry, photographs become a means in a larger process rather than an end unto themselves. Valla relates this alternate history of photography to contemporary discussions around image-flow, social media, ‘big data,’ and surveillance, all instances where value is extracted by aggregating image sets. The user misconstrues the image flow as being inherently about the images themselves, whereas corporate and surveillance value is derived from the relations between the pictures; their metadata. As in photogrammetry, the pictures are a means rather than an end.
The 3D models produced by photogrammetry are images in a quite literal sense – extracted from optical technologies, they are formed on the computer screen by wrapping and folding flat pictures around hollow 3D volumes. The promise of 3D scanning and manufacturing is that we will be able to reproduce objects. The truth is that we get 3D images. Valla emphasizes the increasing confusion between images and objects in these works.
The 3D models are output as 2D pictures: inkjet prints on linen that are wrapped around a 3D formwork. To capture their surfaces, Valla digitally drapes and shrouds the models, transferring the textures and colors onto 3D-modeled fabric – a digital version of the Shroud of Turin. The resulting shrouds reproduce the 3D model as if through transparency, simultaneously revealing and distorting the 3D model underneath. The printed and draped sheets are neither objects nor images, but picture surrogates for the sculpture underneath.
Photos by Vinciane Verguethen
Artist Book, Drawings by 6560 individuals, custom software
2014, RRose Editions and XPO Gallery, Paris, France
The drawings in this book were produced online by anonymous workers through Amazon.com’s micro-labor market known as Mechanical Turk. The drawings began with an initial seed drawing that was copied 8 times. Each of these copies was then copied in turn, and so on, so that all the drawings are copies of copies of copies of the original seed drawing — a huge game of ‘telephone’ played out in a two-dimensional grid. Each page tracks the distance of the drawings from the original seed.
The workers are recruited through Amazon.com’s Mechanical Turk, a pre-existing system that pays workers for specified tasks. For the Seed Drawings, each worker is paid 5¢ to copy the previous worker’s drawing — a paltry sum for a menial task. The Seed Drawings explore two aspects of contemporary networks: the online proliferation of copies and repeated memes, and the spread of cheap, crowdsourced micro-labor.
Webcam, wooden table, brick, trestle, inkjet on canvas
2014, XPO Gallery, Paris, France
The Universal Texture Recreated transforms a flat satellite photograph into a three-dimensional image, reconstructing images from Google Earth using the low-tech medium of domestic furniture. A webcam puts the image back online, cropping it so that it is indistinguishable from Valla’s previous series Postcards from Google Earth, images captured from the screen while traveling through the Google Earth interface. That collection of pictures emphasizes edge conditions, the result of an automated process that fuses aerial photographs and cartographic data. As the source imagery is culled from different periods and vantage points, anomalies in wrapping the 3D projection model appear.
Inkjet on Belgian linen over CNC-milled foam sculptures
2014, XPO Gallery, Paris, France
Wrapped terracotta neck-amphora (storage jar)
Attributed to the New York Nettos Painter
Metropolitan Museum of Art
Date: second quarter of the 7th century B.C.
Culture: Greek, Attic
Dimensions: H. 42 3/4 in. (108.6 cm); diameter 22 in. (55.9 cm)
Credit Line: Rogers Fund, 1911
Accession Number: 11.210.1
Wrapped Terracotta neck-amphora (storage jar) is a sculpture that is neither an object nor an image. Valla used digital 3D capture to reproduce a 7th-century B.C. Greek amphora on view at the Metropolitan Museum of Art. To highlight the artifacts produced by computer vision, the completed reproduction is broken down into its constituent parts: a flat image wrapped around a 3D volume. The resulting sculpture echoes the packing of objects for storage and shipping, transitory moments when only an object’s image is accessible. The image ends up serving as a surrogate for the sculpture underneath.
Public texture maps from 3D models made with 123D Catch, website, Twitter bot
tex-archive mines public models on the community website for Autodesk’s 123D Catch – software that creates 3D models from photographs. These models form a diverse collection ranging from ancient artifacts to contemporary ephemera. tex-archive extracts the texture maps from the publicly uploaded models. These are 2D images used by the computer to skin the three-dimensional model and make it appear realistic. The result is an image never before seen by human eyes. Here the nature of the image has changed – from a means to an end. To mark that change, tex-archive catalogues the images in the Library of Congress, via Twitter.
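(A hedged illustration of the kind of extraction tex-archive performs, assuming a downloaded model is a Wavefront .obj whose companion .mtl material file names its texture images with map_Kd lines, as 123D Catch exports did; the filenames are hypothetical.)

```python
def texture_maps(mtl_path):
    """Return the texture image filenames referenced by a .mtl material file."""
    maps = []
    with open(mtl_path) as f:
        for line in f:
            if line.strip().startswith('map_Kd'):   # diffuse texture reference
                maps.append(line.split(None, 1)[1].strip())
    return maps

print(texture_maps('mesh.mtl'))  # e.g. ['tex_0.jpg']
```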
Inkjet on vinyl, MDF
2014, Festival des Images, Vevey, Switzerland
This installation is based on Valla’s series Postcards from Google Earth. These images, captured by a satellite camera pointing down and typically consumed on a vertical monitor, are remapped to a horizontal ground plane. Large enough to be walked on, the images emphasize the vertigo-inducing anomalies Valla finds in Google Earth.
Postcards from Google Earth are images captured from the screen while traveling through the Google Earth interface. This collection of pictures emphasizes edge conditions, the result of an automated process that fuses aerial photographs and cartographic data. As the source imagery is culled from different periods and vantage points, anomalies in wrapping the 3D projection model appear.
Inkjet on paper
2014, RISD, Providence, RI
For the Forensic Still Lives, Valla produced still life arrangements of nothing, leaving only the calibration targets and measurement tools used in commercial, forensic and archaeological photography. These are typically meant as reference points to include the maximum amount of measurable information in a photograph.
These tools are the forebears of photogrammetry, software meant to turn photographs into 3D models. Valla used this software to produce models of the empty still lifes, turning the system onto itself.
The final works are the digital artifacts produced by the software. These fragments (never meant to be seen by humans) are how the software produces, stores and transmits the photographic data.
3D Reproductions of objects from the Metropolitan Museum of Art, Inkjet on paper mounted on MDF, sintered nylon on laser-cut MDF tables
2014, Transfer Gallery, Brooklyn, NY
Of the moral effect of the monuments themselves, standing as they do in the depths of a tropical forest, silent and solemn, strange in design, excellent in sculpture, rich in ornament, different from the works of any other people, their uses and purposes and whole history so entirely unknown with hieroglyphics explaining all, but being perfectly unintelligible, I shall not pretend to convey any idea. Often the imagination was pained in gazing at them.
— John Lloyd Stephens, Incidents of Travel in Central America, Chiapas, and Yucatan, 1841
This week, Twitterers around the world received some devastating news: The Twitter account @Horse_ebooks, a cult favorite, was human after all.
— Bianca Bosker, Twitter Hoax Reveals What We Desire Most From Machines, Huffington Post, Sept. 26, 2013
Surface Survey comprises digital prints and 3D-printed sculptures, structured around concepts of archaeology, computer software, meaning-making, and images that are not meant for human consumption.
To explore these themes, Valla collects digital artifacts produced by software that turns photographs into 3D models. The arranged fragments are left untouched, exhibiting the software’s process as-is. The work consists of both 2D images meant to be processed by the computer (but never seen by humans) and 3D-printed fragments that indicate how the software pieces the shapes together.
Valla’s subjects are varied: from sculptural antiquities he photographed in the Metropolitan Museum’s collections, to contemporary ephemera, to 19th-century inventions. The work uncovers subtle shapes and textures that illustrate these objects in unexpected ways and cast a new light on the algorithms that digitized them.
Valla’s work reflects on the human potential for meaning-making in unfamiliar, software-created images. He is interested in how distant what a computer reads is from what a human understands. This interest extends into the language of computer image-making, suggesting an archaeology of computer software, whose extractions reveal the computer’s systematic logic.
Screenshots from Google Earth, postcards and postcard racks, inkjet on paper, website
2010-Ongoing, http://www.postcards-from-google-earth.com/, CAM, Raleigh, NC, XPO GALLERY, Paris, France, Thomassen Gallery, Gothenburg, Sweden, University at Buffalo, Buffalo, NY, Art Souterrain, Montreal, Canada, swissnex, San Francisco, CA, Lab for Emerging Arts and Performance (LEAP), Berlin, Germany, Wasserman Projects, Birmingham, MI, Phillips auction house, New York, NY, Tin Sheds Gallery, Sydney, Australia, Gallery Wendi Norris, San Francisco, CA, Villa Terrace Decorative Arts Museum, Milwaukee, WI
I collect Google Earth images. I discovered strange moments where the illusion of a seamless representation of the Earth’s surface seems to break down. At first, I thought they were glitches, or errors in the algorithm, but looking closer I realized the situation was actually more interesting — these images are not glitches. They are the absolute logical result of the system. They are an edge condition—an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They reveal a new model of representation: not through indexical photographs but through automated data collection from a myriad of different sources constantly updated and endlessly combined to create a seamless illusion; Google Earth is a database disguised as a photographic representation. These uncanny images focus our attention on that process itself, and the network of algorithms, computers, storage systems, automated cameras, maps, pilots, engineers, photographers, surveyors and map-makers that generate them.
Publicly available data from Nokia's 3D maps, website
Inkjet on canvas
2012-Ongoing, 319 Scholes, New York, NY, DAAP Galleries, University of Cincinnati, Cincinnati, OH, Bitforms Gallery, New York, NY, ASC Projects, San Francisco, CA, Kunsthalle zu Kiel, Kiel, Germany, University of Massachusetts, Amherst, MA, University at Buffalo, Buffalo, NY
Note: This article was originally published on rhizome.org.
These artists (…) counter the database, understood as a structure of dehumanized power, with the collection, as a form of idiosyncratic, unsystematic, and human memory. They collect what interests them, whatever they feel can and should be included in a meaning system. They describe, critique, and finally challenge the dynamics of the database, forcing it to evolve.1
I collect Google Earth images. I discovered them by accident, these particularly strange snapshots, where the illusion of a seamless and accurate representation of the Earth’s surface seems to break down. I was Google Earth-ing when I noticed that a striking number of buildings looked like they were upside down. I could tell there were two competing visual inputs here — the 3D model that formed the surface of the earth, and the mapping of the aerial photography; they didn’t match up. Depth cues in the aerial photographs, like shadows and lighting, were not aligning with the depth cues of the 3D model.
The competing visual inputs I had noticed produced some exceptional imagery, and I began to find more and started a collection. At first, I thought they were glitches, or errors in the algorithm, but looking closer, I realized the situation was actually more interesting — these images are not glitches. They are the absolute logical result of the system. They are an edge condition — an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They are seams which reveal a new model of seeing and of representing our world – as dynamic, ever-changing data from a myriad of different sources – endlessly combined, constantly updated, creating a seamless illusion.
3D images like those in Google Earth are generated through a process called texture mapping. Texture mapping is a technology developed by Ed Catmull in the 1970s. In 3D modeling, a texture map is a flat image that gets applied to the surface of a 3D model, like a label on a can or a bottle of soda. Textures typically represent a flat expanse with very little depth of field, meant to mimic surface properties of an object. Textures are more like a scan than a photograph. The surface represented in a texture coincides with the surface of the picture plane, unlike a photograph that represents a space beyond the picture plane. This difference might be summed up another way: we see through a photograph, we look at a texture. This is an important distinction in 3D modeling, because textures are stretched across the surface of a 3D model, in essence becoming the skin for the model.
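(A minimal sketch of that skinning operation, assuming a model whose vertices carry UV coordinates into a flat image; the names and the toy 2x2 texture are illustrative, not drawn from any particular software.)

```python
import numpy as np

def sample_texture(texture, u, v):
    """Look up the color of a flat texture image at UV coordinates in [0, 1]."""
    h, w, _ = texture.shape
    x = int(u * (w - 1))
    y = int((1 - v) * (h - 1))   # image rows run top-down, V runs bottom-up
    return texture[y, x]

# A 2x2 "texture": each point on a model with UV coordinates takes its color from here.
tex = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
print(sample_texture(tex, 0.0, 1.0))   # the texel at the model's UV (0, 1) -> [255 0 0]
```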
Google Earth’s textures, however, are not shallow or flat. They are photographs that we look through into a space represented beyond — a space our brain interprets as having three dimensions and depth. We see space in the aerial photographs because of light and shadows and because of our prior knowledge of experienced space. When these photographs get distorted and stretched across the 3D topography of the earth, we are both looking at the distorted picture plane, and through the same picture plane at the space depicted in the texture. In other words, we are looking at two spaces simultaneously. Most of the time this doubling of spaces in Google Earth goes unnoticed, but sometimes the two spaces are so different that things look strange, vertiginous, or plain wrong. But they’re not wrong. They reveal Google’s system used to map the earth — the Universal Texture.
The Universal Texture is a Google patent for mapping textures onto a 3D model of the entire globe.2 At its core the Universal Texture is just an optimal way to generate a texture map of the earth. As its name implies, the Universal Texture promises a god-like (or drone-like) uninterrupted navigation of our planet — not a tiled series of discrete maps, but a flowing and fluid experience. This experience is so different, so much more seamless than previous technologies, that it is an achievement quite like what the escalator did to shopping:
No invention has had the importance for and impact on shopping as the escalator. As opposed to the elevator, which is limited in terms of the numbers it can transport between different floors and which through its very mechanism insists on division, the escalator accommodates and combines any flow, efficiently creates fluid transitions between one level and another, and even blurs the distinction between separate levels and individual spaces.3
In the digital media world, this fluid continuity is analogous to the infinite scroll’s effect on Tumblr. In Google Earth, the Universal Texture delivers a smooth, complete and easily accessible knowledge of the planet’s surface. The Universal Texture is able to take a giant photo collage made up of aerial photographs from all kinds of different sources — various companies, governments, mapping institutes — and map it onto a three-dimensional model assembled from as many distinct sources. It blends these disparate data together into a seamless space – like the escalator merges floors in a shopping mall.
Our mechanical processes for creating images have habituated us into thinking in terms of snapshots – discrete segments of time and space (no matter how close together those discrete segments get, we still count in frames per second and image aspect ratios). But Google is thinking in continuity. The images produced by Google Earth are quite unlike a photograph that bears an indexical relationship to a given space at a given time. Rather, they are hybrid images, a patchwork of two-dimensional photographic data and three-dimensional topographic data extracted from a slew of sources, data-mined, pre-processed, blended and merged in real-time. Google Earth is essentially a database disguised as a photographic representation.
It is an automated, statistical, incessant, universal representation that selectively chooses its data. (For one, there is no ‘night’ in Google’s version of Earth.) The system edits a particular representation of the world. The software edits, re-assembles, processes and packages reality in order to form a very specific and useful model. These collected images feel alien because they are clearly an incorrect representation of the earth’s surface. And it is precisely because humans did not directly create these images that they are so fascinating. They are created by an algorithm that finds nothing wrong in these moments. They are less a creation than a kind of fact – a representation of the laws of the Universal Texture. As a collection the anomalies are a weird natural history of Google Earth’s software. They are strange new typologies, representative of a particular digital process. Typically, the illusion the Universal Texture creates makes the process itself go unnoticed, but these anomalies offer a glimpse into the data collection and assembly. They bring the diverging data sources to light. In these anomalies we understand there are competing inputs, competing data sources and discrepancies in the data. The world is not so fluid after all.
By capturing screenshots of these images in Google Earth, I am pausing them and pulling them out of the update cycle. I capture these images to archive them – to make sure there is a record that this image was produced by the Universal Texture at a particular time and place. As I kept looking for more anomalies, and revisiting anomalies I had already discovered, I noticed the images I had discovered were disappearing. The aerial photographs were getting updated, becoming ‘flatter’ – from being taken at less of an angle or having the shadows below bridges muted. Because Google Earth is constantly updating its algorithms and three-dimensional data, each specific moment could only be captured as a still image. I know Google is trying to fix some of these anomalies too – I’ve been contacted by a Google engineer who has come up with a clever fix for the problem of drooping roads and bridges. Though the change has yet to appear in the software, it’s only a matter of time.
Taking a closer look, Google’s algorithms also seem to have a way of selecting certain types of aerial photographs over others, so that as more photographs are taken, the better ones get selected. To Google, better photographs are flatter, have fewer shadows and are taken from higher angles. Because of this progress, these strange images are being erased. I see part of my work as archiving these temporal digital typologies. I also call these images postcards to cast myself as a tourist in the temporal and virtual space – a space that exists digitally for a moment, and may perhaps never be reconstituted again by any computer.
Nothing draws more attention to the temporality of these images than the simple observation that the clouds are disappearing from Google Earth. After all, clouds obscure the surface of the planet, so photos with no clouds are privileged. The Universal Texture and its attendant database algorithms are trained on a few basic qualitative traits – no clouds, high contrast, shallow depth, daylight photos. Progress in architecture has given us total control over interior environments; climate-controlled spaces smoothly connected by escalators in shopping malls, airports, hotels and casinos. Progress in the Universal Texture promises to give us a smooth and continuous 24-hour, cloudless, daylit world, increasingly free of jarring anomalies, outliers and statistical inconsistency.
(1) Quaranta, Domenico, “Collect the WWWorld. The Artist as Archivist in the Internet Age,” in Domenico Quaranta et al., Collect the WWWorld, exhibition catalogue, LINK Editions, September 2011.
(2) “WebGL Earth Documentation – 2 Real-time Texturing Methods,” WebGL Earth Documentation – 2 Real-time Texturing Methods, N.p., n.d. Web. 30 July 2012.
(3) Jovanovic Weiss, Srdjan and Leong, Sze Tsung, “Escalator,” in Koolhaas et al., Harvard Design School guide to shopping, Köln, New York, Taschen, 2001.
Symbols extracted from Seed Drawings, pen plotter, marker on paper
2012, XPO Gallery, Paris, France
The Mechanical Turk Alphabets are derived from the Seed Drawings. Each symbol in the alphabets was a previously unrecognizable glyph that was frequently copied in the Seed Drawings. These recurring symbols emerged on their own through thousands of iterative copies. Perhaps these are the internet equivalents of archaeologists’ entoptic phenomena – images erroneously thought to be the ur-forms of primitive art accessed in altered psychological states.
The prints were produced using a marker attached to a pen plotter – a machine mimicking the stroke of a stylus.
Drawings by thousands of individuals, custom software, inkjet on paper
2011-Ongoing, Dorsch Gallery, Miami, FL, School of Visual Arts, New York, NY, Brown University, Providence, RI, Indianapolis Museum of Contemporary Art, Indianapolis, IN
The Seed Drawings are a set of drawings that are repeated copies, much like a drawing version of the game ‘Telephone,’ produced by online workers. The process begins with one simple drawing (the ‘Seed’) and each worker copies the previous drawing. The completed drawings track the evolution of the copies over time, as well as the ruptures produced by those workers who ignore the instructions and change the drawings. The workers are recruited through Amazon.com’s Mechanical Turk, a pre-existing system that pays workers for specified tasks. For the Seed Drawings, each worker is paid 5¢ to copy the previous worker’s drawing — a paltry sum for a menial task. The Seed Drawings explore two aspects of contemporary networks: the online proliferation of copies and repeated memes, and the spread of cheap, crowdsourced micro-labor.
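(A toy sketch of the copy chain, assuming each ‘drawing’ is a string of marks and each copy introduces occasional errors in place of the human workers; the mutation model and branching factor are illustrative only.)

```python
import random

def copy_with_errors(drawing, error_rate=0.05):
    """Return an imperfect copy: each mark may be miscopied."""
    marks = '/\\|-o '
    return ''.join(random.choice(marks) if random.random() < error_rate else c
                   for c in drawing)

seed = '|o--o|'                                  # the initial "seed drawing"
generations = [[seed]]
for _ in range(4):                               # each drawing is copied twice, in turn
    generations.append([copy_with_errors(d)
                        for d in generations[-1] for _ in range(2)])

for distance, gen in enumerate(generations):
    print(distance, gen[:4])                     # drift grows with distance from the seed
```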
Image recognition software, high-resolution scans of my studio wall, inkjet on canvas mounted on plywood
Oil paintings ordered online, email correspondence
2010, “Vanishing Point,” Bitforms Gallery, New York, NY
Oil paintings ordered online, copied in sequence by several painters, email correspondence
2012, Indianapolis Museum of Contemporary Art, Indianapolis, IN
Terms resulting in unique search results in Google image search, .zip archive, 75.9 MB, inkjet on paper
2012, rhizome.org, Splatterpool Gallery, Brooklyn, NY, Brown University, Providence, RI
Hapax Phaenomena is a collection of historically unique images discovered through Google image search by collaborators Clement Valla and John Cayley. The fragile and tenuous Phaenomena are organized into subcategories within five folders: 1_discordant_wonderfulness; 2_nondurable_megabyte; 3_inventive_monetarism; 4_patriotic_leaseback; and 5_diatomic_roach. Each Phaenomenon is accompanied by a certificate of authenticity, a screenshot of its moment of global and historical singularity taken by one of the artists.
Currently available through ‘The Download’ on Rhizome.org:
Oil painting ordered over the internet
2009, Wassaic Project, Wassaic, NY, RISD, Providence, RI
“Zhongbo copies…” is an oil painting ordered over the internet from the Wushipu “Chinese Painting Village” in Xiamen, China. It was commissioned for a show in Wassaic, NY.
These “Chinese Painting Villages” are reported to produce over 60% of the world’s oil paintings, a majority of which are copies of famous paintings.
“Zhongbo copies…” begins with a landscape painting by Frederic Church of the Hudson Valley. A contemporary image of Wassaic (in the Hudson Valley) from Google Earth is collaged onto Church’s painting. The artist in China was asked to paint the collaged image, and add in the view from her/his window in Xiamen. The resulting painting is a collage of geographical locations and representational media.
Artist Book, Custom Software, Amazon Mechanical Turk
Custom software recreates a Sol LeWitt drawing. The software also posts instructions on Amazon.com’s Mechanical Turk. Human workers execute the drawings online based on the instructions. The workers are paid 5¢ for each drawing. The software then assembles the Mechanical Turk drawings in a grid. The software drawings and the Mechanical Turk drawings are presented side by side.
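(A hedged sketch of how such a task could be posted today with boto3, Amazon’s current Python SDK, which postdates Valla’s original software; the external drawing page URL is hypothetical, while the 5¢ reward mirrors the task described above.)

```python
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

# An ExternalQuestion points workers at a web page where the drawing happens.
question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/draw?instructions=lewitt</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title='Execute a line drawing from written instructions',
    Description='Follow the written instructions and draw in your browser.',
    Reward='0.05',                        # 5 cents per drawing, as in the work
    MaxAssignments=1,
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=600,
    Question=question,
)
print(hit['HIT']['HITId'])                # identifier for retrieving the result
```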