Clement Valla is a New York-based artist whose work focuses on contemporary picture-producing apparatuses and how they transform representation and ways of seeing.
Parallel Lines on a Freeform Surface
Inkjet on linen
2016, Providence College Galleries, Providence RI

Photography produces perspectival images, which depict space in two dimensions via particular conventions: only one side of an object is depicted, objects further away appear smaller, and parallel lines converge toward a vanishing point on the horizon.
Many devices regulate and measure light waves, but they do not always produce perspectival representations—CAT scans, light-spectral analysis, radar, GPS, QR code readers, photograms, airport security backscatter-imaging devices, to name a few. These devices produce pictures too, but pictures we are not used to reading as “real,” in the way that perspectival drawings, paintings and photos are seen as representations of the “real.” It is easier to categorize these non-perspectival pictures as representing data. And yet, photography is no less a mathematical construct of regulated and measured light waves—it is just that the data coalesce on the picture plane to produce the perspectival images that we are trained to read. We overlook much of the specificity of how photographs are actually produced when we think they create images of the “real.” Photography is actually part of a much broader category of remote-sensing apparatuses and part of a long history of regulating, measuring, and ordering space on a picture plane.
The pictures in Parallel Lines on a Freeform Surface attempt to use photographic means to produce non-perspectival pictures. The images borrow from maps, from architectural drawing, and from painting before perspective. They are a new hybrid: non-perspectival photographs that unfold, measure, and order three-dimensionality according to other two-dimensional pictorial logics, such as orthographic, axonometric, or cartographic projections.
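The difference between these projection logics can be stated in a few lines of code. The sketch below is an illustration written for this text, not code used in the work: it contrasts a perspective projection, in which measurements shrink with distance from the lens, with an orthographic projection, in which measurements on the picture plane stay constant regardless of depth.

def perspective(x, y, z, focal_length=1.0):
    # Distant points (larger z) map to smaller image coordinates.
    return (focal_length * x / z, focal_length * y / z)

def orthographic(x, y, z):
    # Depth is simply discarded; pictured size equals measured size.
    return (x, y)

near, far = (1.0, 1.0, 2.0), (1.0, 1.0, 8.0)
print(perspective(*near), perspective(*far))    # (0.5, 0.5) (0.125, 0.125)
print(orthographic(*near), orthographic(*far))  # (1.0, 1.0) (1.0, 1.0)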
Each print is presented at 1:1 scale: each object pictured is equal in measurement to the actual object (as opposed to shrinking the further it is from the lens, as in perspective photographs). This collection of pictures uses the gallery as laboratory, employing whatever objects are present as the tests and standards to measure this new kind of picture making. The images are presented at scale, in situ, allowing for a comparison of these picture-maps to the real gallery they depict.

 

Download the catalog here.

Surface Proxy
Inkjet on Belgian linen over CNC-milled foam sculptures
2015, XPO Gallery, Paris France
Surface Proxy, exhibition by Clement Valla at XPO Gallery, April 16 – May 21, 2015. © Vinciane Verguethen / Voyez-Vous

Download the Catalog for Surface Proxy

Surface Proxy is a show of picture surrogates: inkjet prints on linen wrapped around CNC-milled foam sculptures.

Valla begins with Medieval French architectural and sculptural fragments that have made their way into the collections of museums in New York and Providence, where the artist lives and works. The objects at the various institutions are chosen because they bear visible scars of their transformation from fixed architectural ornaments in France to stand-alone sculptural works in New York and Providence.

In order to reproduce the fragments for his collection, Valla uses a process known as photogrammetry: objects are photographed from multiple angles, and a 3D model of each object is produced by triangulating between the multiple images.
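At its geometric core, that triangulation amounts to intersecting the viewing rays from two or more camera positions. The sketch below is a hypothetical simplification, not the software Valla uses (which matches thousands of features across many photographs): it recovers a single 3D point from two camera centers and the directions in which the point was seen.

import numpy as np

def triangulate(c1, d1, c2, d2):
    # Find the point closest to both rays c1 + t1*d1 and c2 + t2*d2.
    A = np.stack([d1, -d2], axis=1)                # 3x2 system in t1, t2
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1                            # closest point on ray 1
    p2 = c2 + t[1] * d2                            # closest point on ray 2
    return (p1 + p2) / 2                           # midpoint between the rays

# Two cameras a known distance apart, both looking at the point (0, 0, 5).
c1, c2 = np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = np.array([1.0, 0.0, 5.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-1.0, 0.0, 5.0]); d2 /= np.linalg.norm(d2)
print(triangulate(c1, d1, c2, d2))                 # approximately [0. 0. 5.]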

The origins of photogrammetry are intrinsically linked with this medieval content. The technique was invented by a French mathematician in 1849 and perfected in the latter half of the 19th century by Albrecht Meydenbauer, a Prussian architect tasked with surveying historical monuments on the verge of complete decay. A near-fatal fall from a Gothic cathedral convinced Meydenbauer to employ the ‘magic scaffold’ of photogrammetry – to measure and model from a distance. Contemporary photogrammetry software has made this once specialized scientific process widely accessible.

This follows Valla’s previous explorations of what Harun Farocki terms ‘Operative Images’ – images that are part of an operation, characterized by their aesthetic unintentionality. In photogrammetry, photographs become a means in a larger process rather than an end unto themselves. Valla relates this alternate history of photography to contemporary discussions around image flow, social media, ‘big data,’ and surveillance, all instances where value is extracted by aggregating image sets. The user misconstrues the image flow as being inherently about the images themselves, whereas corporate and surveillance value is derived from the relations between the pictures: their metadata. As in photogrammetry, the pictures are a means rather than an end.

The 3D models produced by photogrammetry are images in a quite literal sense – extracted from optical technologies, they are formed on the computer screen by wrapping and folding flat pictures around hollow 3D volumes. The promise of 3D scanning and manufacturing is that we will be able to reproduce objects. The truth is that we get 3D images. Valla emphasizes the increasing confusion between images and objects in these works.

 

photos by Vinciane Verguethen

Wrapped terracotta neck-amphora (storage jar)
Inkjet on Belgian linen over CNC-milled foam sculptures
2014, XPO Gallery, Paris, France

Wrapped terracotta neck-amphora (storage jar)
Attributed to the New York Nettos Painter
Metropolitan Museum of Art
Period: Proto-Attic
Date: second quarter of the 7th century B.C.
Culture: Greek, Attic
Medium: Terracotta
Dimensions: H. 42 3/4 in. (108.6 cm); diameter 22 in. (55.9 cm)
Classification: Vases
Credit Line: Rogers Fund, 1911
Accession Number: 11.210.1

Wrapped Terracotta neck-amphora (storage jar) is a sculpture that is neither an object nor an image. Valla used digital 3D capture to reproduce a 7th-century BC Greek amphora on view at the Metropolitan Museum of Art. To highlight the artifacts produced by computer vision, the completed reproduction is broken down into its constituent parts: a flat image wrapped around a 3D volume. The resulting sculpture echoes the packing of objects for storage and shipping, transitory moments when only an object’s image is accessible. The image ends up serving as a surrogate for the sculpture underneath.

Surface Survey
3D Reproductions of objects from the Metropolitan Museum of Art, Inkjet on paper mounted on MDF, sintered nylon on laser-cut MDF tables
2014, Transfer Gallery, Brooklyn
Photos by Mike Garten

Of the moral effect of the monuments themselves, standing as they do in the depths of a tropical forest, silent and solemn, strange in design, excellent in sculpture, rich in ornament, different from the works of any other people, their uses and purposes and whole history so entirely unknown with hieroglyphics explaining all, but being perfectly unintelligible, I shall not pretend to convey any idea. Often the imagination was pained in gazing at them.

— John Lloyd Stephens, Incidents of Travel in Central America, Chiapas, and Yucatan, 1841

This week, Twitterers around the world received some devastating news: The Twitter account @Horse_ebooks, a cult favorite, was human after all.

— Bianca Bosker, Twitter Hoax Reveals What We Desire Most From Machines, Huffington Post, Sept. 26, 2013

Surface Survey comprises digital prints and 3D-printed sculptures, structured around concepts of archaeology, computer software, meaning-making, and images that are not meant for human consumption.
To explore these themes, Valla collects digital artifacts produced by software that turns photographs into 3D models. The arranged fragments are left untouched, exhibiting the software’s process as-is. The work comprises both 2D images meant to be processed by the computer (but never seen by humans) and 3D-printed fragments that indicate how the software pieces the shapes together.
Valla’s subjects are varied: from sculptural antiquities he photographed in the Metropolitan Museum’s collections, to contemporary ephemera, to 19th-century inventions. The work uncovers subtle shapes and textures that illustrate these objects in unexpected ways and cast a new light on the algorithms that digitized them.
Valla’s work reflects on the human potential for meaning-making in unfamiliar, software-created images. He is interested in the distance between what a computer reads and what a human understands. This interest extends into the language of computer image-making, suggesting an archaeology of computer software whose extractions reveal the computer’s systematic logic.

 

Seed Drawings
Drawings by thousands of individuals, custom software, inkjet on paper
2011-Ongoing, Dorsch Gallery, Miami, FL, School of Visual Arts, New York, NY, Brown University, Providence, RI, Indianapolis Museum of Contemporary Art, Indianapolis, IN

The Seed Drawings are a set of drawings produced by online workers through repeated copying, much like a drawing version of the game ‘Telephone.’ The process begins with one simple drawing (the ‘Seed’), and each worker copies the previous drawing. The completed drawings track the evolution of the copies over time, as well as the ruptures produced by those workers who ignore the instructions and change the drawings. The workers are culled through Amazon.com’s Mechanical Turk, a pre-existing online service that offers workers payment for specified tasks. For the Seed Drawings, each worker is paid 5¢ to copy the previous worker’s drawing — a paltry sum for a menial task. The Seed Drawings explore two aspects of contemporary networks: the online proliferation of copies and repeated memes, and the spread of cheap, crowdsourced micro-labor.
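For readers curious about the mechanics, the sketch below shows how such a 5¢ copy task could be posted to Mechanical Turk with Amazon’s boto3 client. It is a hypothetical illustration only: the task URL, frame height, and durations are invented, and this is not the artist’s code.

import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# The worker is shown the previous drawing on an external page and copies it.
question_xml = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/copy-task?previous_drawing=42</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Copy a small line drawing",
    Description="Copy the drawing shown as closely as you can.",
    Reward="0.05",                      # the 5-cent payment described above
    MaxAssignments=1,                   # one worker copies each drawing
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])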

The Universal Texture
Inkjet on Canvas
2012-Ongoing, 319 Scholes, New York, NY, DAAP Galleries, University of Cincinnati, Cincinnati, OH, Bitforms Gallery, New York, NY, ASC Projects, San Francisco, CA, Kunsthalle zu Kiel, Kiel, Germany, University of Massachusetts, Amherst, MA, University at Buffalo, Buffalo, NY

Note: This article was originally published on rhizome.org.

These artists (…) counter the database, understood as a structure of dehumanized power, with the collection, as a form of idiosyncratic, unsystematic, and human memory. They collect what interests them, whatever they feel can and should be included in a meaning system. They describe, critique, and finally challenge the dynamics of the database, forcing it to evolve.1

I collect Google Earth images. I discovered them by accident, these particularly strange snapshots, where the illusion of a seamless and accurate representation of the Earth’s surface seems to break down. I was Google Earth-ing when I noticed that a striking number of buildings looked like they were upside down. I could tell there were two competing visual inputs here — the 3D model that formed the surface of the earth, and the mapping of the aerial photography; they didn’t match up. Depth cues in the aerial photographs, like shadows and lighting, were not aligning with the depth cues of the 3D model.

The competing visual inputs I had noticed produced some exceptional imagery, and I began to find more and start a collection.  At first, I thought they were glitches, or errors in the algorithm, but looking closer, I realized the situation was actually more interesting — these images are not glitches. They are the absolute logical result of the system. They are an edge condition—an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They are seams which reveal a new model of seeing and of representing our world – as dynamic, ever-changing data from a myriad of different sources – endlessly combined, constantly updated, creating a seamless illusion.

3D images like those in Google Earth are generated through a process called texture mapping, a technology developed by Ed Catmull in the 1970s. In 3D modeling, a texture map is a flat image that gets applied to the surface of a 3D model, like a label on a can or a bottle of soda. Textures typically represent a flat expanse with very little depth of field, meant to mimic the surface properties of an object. Textures are more like a scan than a photograph: the surface represented in a texture coincides with the surface of the picture plane, unlike a photograph, which represents a space beyond the picture plane. This difference might be summed up another way: we see through a photograph; we look at a texture. This is an important distinction in 3D modeling, because textures are stretched across the surface of a 3D model, in essence becoming the skin for the model.
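Concretely, “applying a flat image to a surface” means that every point on the 3D model carries a coordinate into the flat picture. The following minimal sketch of that lookup is a hypothetical illustration (it uses the Pillow imaging library, and the file name is invented):

from PIL import Image

def sample(texture, u, v):
    # Look up the pixel at normalized texture coordinates (u, v) in [0, 1].
    w, h = texture.size
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return texture.getpixel((x, y))

texture = Image.open("label.png")               # the flat "label on a can"
# One triangle of the model, each vertex tagged with a (u, v) coordinate.
triangle_uvs = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
print([sample(texture, u, v) for u, v in triangle_uvs])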

Google Earth’s textures, however, are not shallow or flat. They are photographs that we look through into a space represented beyond—a space our brain interprets as having three dimensions and depth. We see space in the aerial photographs because of light and shadows and because of our prior knowledge of experienced space. When these photographs get distorted and stretched across the 3D topography of the earth, we are both looking at the distorted picture plane and through the same picture plane at the space depicted in the texture. In other words, we are looking at two spaces simultaneously. Most of the time this doubling of spaces in Google Earth goes unnoticed, but sometimes the two spaces are so different that things look strange, vertiginous, or plain wrong. But they’re not wrong. They reveal the system Google uses to map the earth — the Universal Texture.

The Universal Texture is a Google patent for mapping textures onto a 3D model of the entire globe.2 At its core the Universal Texture is just an optimal way to generate a texture map of the earth. As its name implies, the Universal Texture promises a god-like (or drone-like) uninterrupted navigation of our planet — not a tiled series of discrete maps, but a flowing and fluid experience. This experience is so different, so much more seamless than previous technologies, that it is an achievement quite like what the escalator did to shopping:

 No invention has had the importance for and impact on shopping as the escalator. As opposed to the elevator, which is limited in terms of the numbers it can transport between different floors and which through its very mechanism insists on division, the escalator accommodates and combines any flow, efficiently creates fluid transitions between one level and another, and even blurs the distinction between separate levels and individual spaces.3

In the digital media world, this fluid continuity is analogous to the infinite scroll’s effect on Tumblr. In Google Earth, the Universal Texture delivers a smooth, complete and easily accessible knowledge of the planet’s surface. The Universal Texture is able to take a giant photo collage made up of aerial photographs from all kinds of different sources — various companies, governments, mapping institutes — and map it onto a three-dimensional model assembled from as many distinct sources. It blends these disparate data together into a seamless space – like the escalator merges floors in a shopping mall.

Our mechanical processes for creating images have habituated us into thinking in terms of snapshots – discrete segments of time and space (no matter how close together those discrete segments get, we still count in frames per second and image aspect ratios). But Google is thinking in continuity. The images produced by Google Earth are quite unlike a photograph that bears an indexical relationship to a given space at a given time. Rather, they are hybrid images, a patchwork of two-dimensional photographic data and three-dimensional topographic data extracted from a slew of sources, data-mined, pre-processed, blended and merged in real-time. Google Earth is essentially a database disguised as a photographic representation.

It is an automated, statistical, incessant, universal representation that selectively chooses its data. (For one, there is no ‘night’ in Google’s version of Earth.) The system edits a particular representation of the world. The software edits, re-assembles, processes and packages reality in order to form a very specific and useful model. These collected images feel alien, because they are clearly an incorrect representation of the earth’s surface. And it is precisely because humans did not directly create these images that they are so fascinating. They are created by an algorithm that finds nothing wrong in these moments. They are less a creation than a kind of fact – a representation of the laws of the Universal Texture. As a collection, the anomalies are a weird natural history of Google Earth’s software. They are strange new typologies, representative of a particular digital process. Typically, the illusion the Universal Texture creates makes the process itself go unnoticed, but these anomalies offer a glimpse into the data collection and assembly. They bring the diverging data sources to light. In these anomalies we understand there are competing inputs, competing data sources and discrepancies in the data. The world is not so fluid after all.

By capturing screenshots of these images in Google Earth, I am pausing them and pulling them out of the update cycle. I capture these images to archive them – to make sure there is a record that this image was produced by the Universal Texture at a particular time and place. As I kept looking for more anomalies, and revisiting anomalies I had already discovered, I noticed the images I had discovered were disappearing. The aerial photographs were getting updated, becoming ‘flatter’ – from being taken at less of an angle or having the shadows below bridges muted. Because Google Earth is constantly updating its algorithms and three-dimensional data, each specific moment could only be captured as a still image. I know Google is trying to fix some of these anomalies too – I’ve been contacted by a Google engineer who has come up with a clever fix for the problem of drooping roads and bridges. Though the change has yet to appear in the software, it’s only a matter of time.

Taking a closer look, Google’s algorithms also seem to have a way to select certain types of aerial photographs over others, so as more photographs are taken, the better ones get selected. To Google, better photographs are flatter, have fewer shadows and are taken from higher angles. Because of this progress, these strange images are being erased. I see part of my work as archiving these temporal digital typologies. I also call these images postcards to cast myself as a tourist in the temporal and virtual space – a space that exists digitally for a moment, and may perhaps never be reconstituted again by any computer.

Nothing draws more attention to the temporality of these images than the simple observation that the clouds are disappearing from Google Earth. After all, clouds obscure the surface of the planet, so photos with no clouds are privileged. The Universal Texture and its attendant database algorithms are trained on a few basic qualitative traits – no clouds, high contrast, shallow depth, daylight photos. Progress in architecture has given us total control over interior environments: climate-controlled spaces smoothly connected by escalators in shopping malls, airports, hotels and casinos. Progress in the Universal Texture promises to give us a smooth and continuous 24-hour, cloudless, daylit world, increasingly free of jarring anomalies, outliers and statistical inconsistency.

Notes:

(1) Quaranta, Domenico, “Collect the WWWorld. The Artist as Archivist in the Internet Age,” in Domenico Quaranta et al., Collect the WWWorld, exhibition catalogue, LINK Editions, September 2011.

(2) “WebGL Earth Documentation – 2 Real-time Texturing Methods,” n.p., n.d. Web. 30 July 2012.

(3) Jovanovic Weiss, Srdjan and Leong, Sze Tsung, “Escalator,” in Koolhaas et al., Harvard Design School guide to shopping, Köln, New York, Taschen, 2001.

Postcards from Google Earth
Screenshots from Google Earth, postcards and postcard racks, inkjet on paper, website
2010-Ongoing, http://www.postcards-from-google-earth.com/, CAM, Raleigh, NC, XPO GALLERY, Paris, France, Thomassen Gallery, Gothenburg, Sweden, University at Buffalo, Buffalo, NY, Art Souterrain, Montreal, Canada, swissnex, San Francisco, CA, Lab for Emerging Arts and Performance (LEAP), Berlin, Germany, Wasserman Projects, Birmingham, MI, Phillips auction house, New York, NY, Tin Sheds Gallery, Sydney, Australia, Gallery Wendi Norris, San Francisco, CA, Villa Terrace Decorative Arts Museum, Milwaukee, WI

I collect Google Earth images. I discovered strange moments where the illusion of a seamless representation of the Earth’s surface seems to break down. At first, I thought they were glitches, or errors in the algorithm, but looking closer I realized the situation was actually more interesting — these images are not glitches. They are the absolute logical result of the system. They are an edge condition—an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They reveal a new model of representation: not through indexical photographs but through automated data collection from a myriad of different sources constantly updated and endlessly combined to create a seamless illusion; Google Earth is a database disguised as a photographic representation. These uncanny images focus our attention on that process itself, and the network of algorithms, computers, storage systems, automated cameras, maps, pilots, engineers, photographers, surveyors and map-makers that generate them.

Visit the site

Iconoclashes
Images from the Metropolitan Museum of Art website with the keyword ‘God’ or ‘Religion’, Custom Software, Adobe Photoshop Photomerge script, inkjet on paper
2013, Thomassen Gallery, Gothenburg, Sweden, Mulherin + Pollard, New York, NY, ADA Gallery, Richmond, VA

Collaboration with Erik Berglin

“As is well known from art historians and theologians, many sacred icons that have been celebrated and worshipped are called acheiropoiete; that is, not made by any human hand. Faces of Christ, portraits of the Virgin, Veronica’s Veil; there are many instances of these icons that have fallen from heaven without any intermediary. To show that a humble human painter has made them would be to weaken their force, to sully their origin, to desecrate them.”
—Bruno Latour, What is Iconoclash?, 2002

“Iconoclashes” are a clashing of objects from various time periods and an assortment of cultures, representing a multiplicity of religions.

The starting point of these images is the Metropolitan Museum of Art’s public web archive; specifically all photographs of objects tagged with the keywords ‘God’ or ‘Religion.’

These source images were randomly grouped and digitally merged with the Photomerge script inside Adobe Photoshop. The script is a common algorithm used to stitch separate images together into longer panoramas. In the case of “Iconoclashes,” the script attempts to blend these “God”-tagged images together, creating chimeric deities, hybrid talismans, and surreal stelae, gods and statues.
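The grouping step is simple to picture in code. The sketch below is a hypothetical reconstruction, not the actual script: the folder name is invented, and the merging itself is left to Photoshop’s Photomerge. It shuffles the downloaded, keyword-tagged images and splits them into random sets, one set per merge.

import random
from pathlib import Path

def random_groups(folder, group_size=4, seed=None):
    files = sorted(Path(folder).glob("*.jpg"))
    rng = random.Random(seed)
    rng.shuffle(files)
    return [files[i:i + group_size] for i in range(0, len(files), group_size)]

# Images tagged 'God' or 'Religion', saved locally from the Met's web archive.
for group in random_groups("met_god_religion"):
    print([f.name for f in group])   # each list becomes one Photomerge input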

The “Iconoclashes” appear as a typical cataloging of a museum’s archive. The Met Museum uses standards when photographing its archive — all objects are presented on a generic grey surface, with similar lighting, and they all occupy the same percentage of the frame, regardless of scale. The “Iconoclashes” exploit this styling; the Photomerge script works only because of this stylistic consistency.

Yet these are not typical museum images; these are not objects that could ever exist. The images are smooth and photoreal, but the space, the colors and the physics simply don’t add up. Their strangeness is the product of an algorithm rather than a human creator.

more at iconoclashes.com

Three Digs A Skull

2015, Published by Library of the Printed Web

The 38 texture maps included in Three Digs A Skull were created by photo-modeling software. The maps are used to add photorealistic surface information to web-based 3D models scanned by different users. They are produced by an algorithm and meant to be parsed by modeling software, but not seen by humans. As such, their aesthetic is non-intentional.

Each visitor to tex_archive.com activates a search for a new texture map, which is then presented, archived, and posted to Twitter at @tex_archive (where it is cataloged by the Library of Congress). Retrieved in this way, the texture maps are removed from their normal operation and displayed to a human viewer. Divorced from the software and the 3D model, they no longer function as operative images. The tex_archive records this transformation.

Three Digs A Skull shifts the context again, re-framing the tex_archive as a chance-retrieved inventory of object-images.

tex-archive.com
twitter.com/tex_archive

Samples from Seed Drawing 51
Artist Book, Drawings by 6560 individuals, custom software
2014, RRose Editions and XPO Gallery, Paris France

The drawings in this book were produced online by anonymous workers through Amazon.com’s micro-labor market known as Mechanical Turk. The drawings began with an initial seed drawing that was copied 8 times. Each of these copies was then copied in turn, and so on, so that all the drawings are copies of copies of copies of the original seed drawing — a huge game of ‘telephone’ played out in a two-dimensional grid. Each page tracks the distance of the drawings from the original seed.
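The structure can be sketched as data: a seed, eight first-generation copies, and chains of copies of copies, each drawing recording its distance from the seed. This is an illustrative simplification, not the project’s actual software.

def build_chains(branches=8, chain_length=5):
    drawings = [{"id": "seed", "parent": None, "distance": 0}]
    for b in range(branches):
        parent = "seed"
        for step in range(1, chain_length + 1):
            drawing_id = f"branch{b}_copy{step}"
            drawings.append({"id": drawing_id, "parent": parent, "distance": step})
            parent = drawing_id        # the next worker copies this drawing
    return drawings

for d in build_chains(branches=8, chain_length=3):
    print(d)   # e.g. {'id': 'branch0_copy2', 'parent': 'branch0_copy1', 'distance': 2}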

The workers are culled through Amazon.com’s Mechanical Turk, a pre-existing online service that offers workers payment for specified tasks. For the Seed Drawings, each worker is paid 5¢ to copy the previous worker’s drawing — a paltry sum for a menial task. The Seed Drawings explore two aspects of contemporary networks: the online proliferation of copies and repeated memes, and the spread of cheap, crowdsourced micro-labor.

for more information contact:
RRose Editions
XPO Gallery

The Universal Texture Recreated (46°42’3.50″N, 120°26’28.59″W)
Webcam, wooden table, brick, trestle, inkjet on canvas
2014, XPO Gallery, Paris France

The Universal Texture Recreated transforms a flat satellite photograph into a three-dimensional image, reconstructing images from Google Earth using the low-tech medium of domestic furniture. A webcam puts the image back online, cropping it so that it is indistinguishable from Valla’s previous series Postcards from Google Earth, which are based on images captured from the screen while traveling through the Google Earth interface. That collection of pictures emphasizes edge conditions, the result of an automated process that fuses aerial photographs and cartographic data. As the source imagery is culled from different periods and vantage points, anomalies appear where the imagery is wrapped onto the 3D projection model.

tex-archive
Public texture maps from 3D models made with 123D Catch, website, Twitter bot
2014, http://tex-archive.com

tex-archive mines public models on the community website for Autodesk’s 123D Catch – a software that creates 3D models from photographs. These models form a diverse collection ranging from ancient artifacts to contemporary ephemera. tex-archive extracts the texture maps from the publicly uploaded models. These are 2D images used by the computer to skin the three-dimensional model and make it appear realistic. The result is an image never before seen by human eyes. Here the nature of the image has changed, from a means to an end. To mark that change, tex-archive catalogues the images in the Library of Congress, via Twitter.
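A hypothetical sketch of the extraction step: assuming a downloaded model ships as a Wavefront OBJ with a companion .mtl material file (an assumption, and the file path below is invented), the texture maps are simply the image files that the material file points to.

from pathlib import Path

def texture_maps(mtl_path):
    # Return the texture image filenames referenced by an OBJ material file.
    maps = []
    for line in Path(mtl_path).read_text(errors="ignore").splitlines():
        parts = line.strip().split(maxsplit=1)
        if len(parts) == 2 and parts[0].lower().startswith("map_"):
            maps.append(parts[1])      # e.g. a line like "map_Kd skull_tex_0.jpg"
    return maps

print(texture_maps("downloads/skull/skull.mtl"))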

Postcards from Google Earth Installation
Inkjet on vinyl, MDF
2014, Festival des Images, Vevey, Switzerland
Vevey, Festival Images 2014 – Postcards from Google Earth, Clement Valla (Photo © Celine Michel)

This installation is based on Valla’s series Postcards from Google Earth. These images, captured by a satellite camera pointing down and typically consumed on a vertical monitor, are remapped to a horizontal ground plane. Large enough to be walked on, the images emphasize the vertigo-inducing anomalies Valla finds in Google Earth.
Postcards from Google Earth are images captured from the screen while traveling through the Google Earth interface. This collection of pictures emphasizes edge conditions, the result of an automated process that fuses aerial photographs and cartographic data. As the source imagery is culled from different periods and vantage points, anomalies in wrapping the 3-D projection model appear.

Forensic Still Life (Orange)
Inkjet on paper
2014, RISD, Providence, RI

For the Forensic Still Lifes, Valla produced still-life arrangements of nothing, leaving only the calibration targets and measurement tools used in commercial, forensic, and archaeological photography. These are typically meant as reference points to include the maximum amount of measurable information in a photograph.
These tools are the forebears of photogrammetry, a type of software meant to turn photographs into 3D models. Valla used this software to produce models of the empty still lifes, turning the system on itself.
The final works are the digital artifacts produced by the software. These fragments (never meant to be seen by humans) are how the software produces, stores, and transmits the photographic data.

3d-maps-minus-3d.com
Publicly available data from Nokia's 3D maps, website
2013, http://www.3d-maps-minus-3d.com/

3d-maps-minus-3d is a website that allows you to browse Nokia’s 3D maps with all of the 3D information removed.

visit the project

Mechanical Turk Alphabets
Symbols extracted from Seed Drawings, pen-plotter, marker on paper
2012, XPO Gallery, Paris, France

The Mechanical Turk Alphabets are derived from the Seed Drawings. Each symbol in the alphabets was a previously unrecognizable glyph that was frequently copied in the Seed Drawings. These recurring symbols emerged on their own through thousands of iterative copies. Perhaps these are the internet equivalents of archaeologists’ entoptic phenomena – erroneously thought to be the ur-forms of primitive art accessed in altered psychological states.

The prints were produced using a marker attached to a pen plotter – a machine mimicking the stroke of a stylus.

Blank Process (Areas of interest on the studio wall as determined by a computer vision algorithm)
Image recognition software, high-resolution scans of my studio wall, inkjet on canvas mounted on plywood
2012,
Mark Copies J.F. Kensett’s Lily Pond, Newport, Rhode Island, and Paints in the View from his Studio Window
Oil paintings ordered online, email correspondence
2010, “Vanishing Point,” Bitforms Gallery, New York, NY
Mark Copies J.F. Kensett’s Lily Pond and Paints in the View from his Studio Window
Zhongbo Copies J.F. Kensett’s Almy's Pond and Paints in the View from his Studio Window
Helen Copies Lucy Copies Mr. Lin Copies Jane Copies Steven Copies Andy Copies Mark Copies Zhongbo Adds a Skyscraper to Almy’s Pond, Newport
Oil paintings ordered online, copied in sequence by several painters, email correspondence
2012, Indianapolis Museum of Contemporary Art, Indianapolis, IN

Zhongbo Adds a Skyscraper to Almy’s Pond, Newport

email correspondence between clementvalla@gmail.com and cn@goldappleart.com
      July 9, 2012 6:03 PM
      Subject: Custom Painting
      1 attachment, 78 KB

Hello,
I would like to order a reproduction of the landscape painting attached below. However, could you add something to the painting: a typical building or skyscraper that you can see from your window.
The painting should be 50cm by 40cm.
Thanks and best,
Clement
 
 

      July 9, 2012 10:43 PM
      Subject: Re: Custom Painting
      1 attachment, 121 KB

Dear Clement:
Ok,Thanks again.also,we do a draft,please take a look,is it ok???
The total amount is 82.4 USD , (include the shipping cost is still 23.6 USD, the paintings is 58.8 USD,) You need to pay 82.4 x in advance 50% = 41.2 USD for the paintings
You can use paypal to pay,Our Paypal account number is: cn@goldappleart.com
The paintings will be completed within 12 days after we received your advance deposit 41.2 USD
Attached to the letter price list as a reference.
We would like to hear from you.
Best regards !
Zhongbo
 
 

      July 10, 2012 10:27 AM
      Subject: Re: Custom Painting

Dear Zhongbo,
I have made a payment of 41.2 USD through paypal. You can proceed with the draft that you sent.
Best,
Clement
 
 

      July 20, 2012 6:39 AM
      Subject: Re: Painting Completed
      1 attachment, 139 KB

Dear Clement:
The paintings was completed ,attached the photos,please take a look,thanks.
We would like to hear from you.
Best regards!
Zhongbo
 
 

      July 20, 2012 10:37 AM
      Subject: Re: Painting Completed

Dear Zhongbo,
This looks great, thanks. I will make the final payment through paypal.
The painting should be shipped to the following address:

Clement Valla
Rhode Island School of Design
Department of Graphic Design
Two College Street
Providence, RI 02903-2784
United States
Phone: +1.917.407.4934
Thanks and best,
Clement
 
 

      July 24, 2012 7:33 AM
      Subject: Re: Painting Shipped Out
      1 attachment, 271 KB

Dear Clement:
The paintings have been shipped out. You should be get them around 5 days.
Tracking number is 6547058685 by DHL.
You should see the shipping process http://www.dhl.com/en/express/tracking/shippers_reference.html 2 days later.
Best regards !
Zhongbo
 
 

Hapax Phaenomena
Terms resulting in unique search results in Google image search, .Zip Archive, 75.9 MB, inkjet on paper
2012, rhizome.org, Splatterpool Gallery, Brooklyn, Brown University, Providence, RI

Hapax Phaenomena is a collection of historically unique images discovered through Google image search by collaborators Clement Valla and John Cayley. The fragile and tenuous Phaenomena are organized into subcategories within five folders: 1_discordant_wonderfulness; 2_nondurable_megabyte; 3_inventive_monetarism; 4_patriotic_leaseback; and 5_diatomic_roach. Each Phaenomenon is accompanied by a certificate of authenticity: a screenshot of its moment of global and historical singularity taken by one of the artists.

Currently available through ‘The Download’ on Rhizome.org:
http://rhizome.org/the-download/

 

Or here:
http://clementvalla.com/xfer/Hapax_Phaenomena.zip
