Blog

Personal Depersonalization System at Figure One in August

Patch Adams checking out Personal Depersonalization System at Figure One

My new work ‘Personal Depersonalization System’ is on view in August at Figure One in Downtown Champaign, IL. The work is part of a show titled ‘Accepted Knowing: Peer Review’, curated by Nicki Werner, Maria Lux, and Jeanie Austin. The space is open on Tuesdays from 12-5p and Thursdays from 5-9p until the closing reception on August 26, from 6-9p. If you have a chance, I highly recommend the show.

Recording of Shift for Six Udderbots

Jacob Barton
photograph by Chris Marolf

On January 19, 2011, Jacob Barton performed the world premiere of my musical work titled Shift (2010) at the Contemporary Arts Center of Virginia in Virginia Beach, VA. Barton is the world’s premier Udderbot virtuoso, and Shift is written for one live udderbot and five recorded udderbots.

Shift was originally an unfinished composition of mine for large chamber ensemble, but I rewrote it for six udderbots when Jacob commissioned a new piece from me last fall. The score provided him with rhythms and pitch-cluster choices, but did not specify exact pitches; Jacob made those choices from the given material and wrote out a final score. Because the work is for six udderbots and Jacob is only one, he recorded five of the parts in advance and performs the sixth live along with the recording.
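
For those curious about the mechanics, the realization step can be thought of as a simple selection procedure. Here’s a minimal sketch in Python; the durations and pitch clusters below are invented for illustration and are not taken from the actual score of Shift:

```python
import random

# Invented example data: each score event pairs a rhythmic duration
# (in beats) with a cluster of allowed pitches (MIDI note numbers).
# These values are illustrative only, not from the score of Shift.
score_events = [
    (1.0, [60, 61, 63]),      # quarter note: choose one pitch from the cluster
    (0.5, [62, 64]),          # eighth note
    (1.5, [59, 60, 65, 66]),  # dotted quarter
]

def realize_part(events, seed=None):
    """Pick one concrete pitch per event, turning cluster choices
    into a fixed, performable line."""
    rng = random.Random(seed)
    return [(duration, rng.choice(cluster)) for duration, cluster in events]

# Six realizations -> six udderbot parts (five recorded, one live).
parts = [realize_part(score_events, seed=i) for i in range(6)]
for i, part in enumerate(parts, 1):
    print(f"part {i}: {part}")
```

A seeded random choice is just one way to fix the pitches; Jacob of course made his choices by ear rather than by chance.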

Interactive Robotic Painting Machine’s Works

In preparation for its debut performance this Tuesday, I’m posting a few works by my interactive robotic painting machine. The machine takes about 70 minutes to paint works like these, while the performance will only run 10-15 minutes. As such, the machine will be painting quite a bit smaller in order to finish within that timeframe. More soon…

Painting by Interactive Robotic Painting Machine (2011)
oil on canvas, 15x10"

Painting by Interactive Robotic Painting Machine (2011)
oil on canvas, 15x10"

Painting by Interactive Robotic Painting Machine (2011)
oil on canvas, 15x10"

Painting by Interactive Robotic Painting Machine (2011)
oil on canvas, 15x10"

Painting by Interactive Robotic Painting Machine (2011)
oil on canvas, 15x10"

Interactive Robotic Painting Machine Makes Debut

On April 26 at 7:30p, my interactive robotic painting machine will make its public debut at the Krannert Center for the Performing Arts in Urbana, IL. The result of over a year’s worth of work, the machine uses artificial intelligence to paint its own body of work and to make its own decisions. It also listens to its environment and considers what it hears as input into the painting process.

Watching my interactive robotic painting machine make a painting. ~3 second exposure.

So why is this first appearing at a performing arts center? Because it will appear in a collaborative work between composer Zack Browning and me, titled Head Swap. Head Swap is a work for amplified violin and interactive robotic painting machine that will mix music, art, and robotics into one multidisciplinary performance. The machine will create a painting during the piece, using what it hears to help it evaluate what it makes. The violinist, Benjamin Sung, will play Browning’s music, using what he sees the machine paint as guidance through the score. The machine also functions as a musical instrument itself, feeding pitched chords (usually dyads) back into the work.
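
To make the listening-and-evaluating loop concrete, here is a rough sketch of how such a feedback step might be structured. The names, the loudness-to-width mapping, and all the numbers are my invented assumptions for illustration, not the machine’s actual code:

```python
import numpy as np

def rms_loudness(audio):
    """Root-mean-square level of an audio buffer (values in [-1, 1])."""
    return float(np.sqrt(np.mean(np.square(audio))))

def score_stroke(stroke_width, loudness, quiet_width=2.0, loud_width=12.0):
    """Hypothetical evaluation: prefer wide strokes when the room is loud,
    narrow strokes when it is quiet. (An invented mapping.)"""
    target = quiet_width + loudness * (loud_width - quiet_width)
    return -abs(stroke_width - target)  # higher score = closer to target

# Stand-in for a microphone capture: 0.1 s of synthetic noise at 44.1 kHz.
audio = np.random.uniform(-0.3, 0.3, size=4410)
loudness = rms_loudness(audio)

candidates = [1.0, 4.0, 8.0, 12.0]  # candidate stroke widths in mm
best = max(candidates, key=lambda w: score_stroke(w, loudness))
print(f"loudness={loudness:.2f} -> chose stroke width {best} mm")
```

In the real piece the evaluation is of the painting itself, with audio as one input among others; this sketch only shows the general shape of folding an audio feature into a scoring function.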

There will be two other, older collaborations between Zack and me on the concert. Back in the ’90s, I wrote a sound synthesis software package called GACSS, which both Zack and I used for our work. Zack will have two pieces on this concert for instrument and computer-generated sound that utilize the software I wrote. This was a fruitful collaboration that produced a lot of rocking music!

I’ll be posting lots of documentation of the robotics project over the next month or so, but if you’re in town, please come by and check out the concert.


Variable Mirror at Anka Gallery in April

My work Variable Mirror (2009) has been accepted into the PXL show at the Anka Gallery in Portland, Oregon. The PXL show is about work that explores the impact of the pixel on the world. Variable Mirror is part of my Flexible Pixels Project.

still image capture from Variable Mirror (2009)

During the opening reception of the show, a presentation will be given (and webcast) by Russell Kirsch. Kirsch is considered to be one of the founders of digital imaging, and is credited with having created the first digital image in 1957. Having originally crafted the pixel as inherently square, Kirsch now believes that pixels should be variable in shape. He will present a new technique for doing so.

I look forward to seeing what he’s working on. My Flexible Pixels Project is constructed in response to the current fixed shape and use of the pixel. My works in this project explore what happens if you break the rules of the pixel and allow pixels to vary in size, shape, and arrangement. While Kirsch and I are after quite different goals, he’s the first person other than myself I’ve heard talk about this notion of variable pixels. And who would have expected it to come from the person who created the pixel in the first place?
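
For a computational flavor of what variable pixels could mean, here is a toy sketch that re-renders a grayscale image with square pixels whose size adapts to local detail: flat regions get large pixels, busy regions small ones. The thresholds and the subdivision rule are my assumptions for illustration; this describes neither Kirsch’s technique nor Variable Mirror:

```python
import numpy as np

def variable_pixels(img, min_size=4, max_size=32, threshold=10.0):
    """Re-render a grayscale image with variable-size square pixels:
    a block is filled with its mean unless its variation exceeds the
    threshold, in which case it is split into four smaller blocks.
    Assumes image dimensions are divisible by max_size."""
    out = np.empty_like(img, dtype=float)

    def fill(y, x, size):
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.std() < threshold:
            out[y:y + size, x:x + size] = block.mean()
        else:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    fill(y + dy, x + dx, half)

    h, w = img.shape
    for y in range(0, h, max_size):
        for x in range(0, w, max_size):
            fill(y, x, max_size)
    return out

# Demo on a synthetic gradient with one noisy ("busy") region.
img = np.tile(np.linspace(0, 255, 128), (128, 1))
img[32:64, 32:64] += np.random.uniform(-40, 40, (32, 32))
print(variable_pixels(img).shape)  # (128, 128)
```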

The show is open from April 7 through April 29, with a First Thursday opening (and presentation by Kirsch) on April 7 from 6-9p, and an Open House reception on April 21 from 4-7p.

Speed of Reality at Co-Prosperity Sphere

My installation work titled Speed of Reality is part of a group show in Chicago titled Artsplosia. Artsplosia presents the work of the MFA students at the University of Illinois. Twenty-two artists in addition to me are presenting work in media ranging from painting and sculpture to drawing and new media.

installation shot of Speed of Reality at Co-Prosperity Sphere in Chicago

The show opened at Co-Prosperity Sphere, 3219 S. Morgan St., Chicago, on March 26 and runs through April 2. There will be a closing reception on April 1 at 6p. I will be there, so stop by if you’re in town!

What Are Art/Music Machines For?

When I started working with computer music around 1990, the technology was quite different from what we have available today. Computers didn’t come with sound cards, there was no SuperCollider or Max/MSP, and disk space to store created sounds was extremely limited. To engage with the medium, I got my start working in the UIUC Computer Music Project, a lab which provided a home-built digital-to-analog converter (DAC) and a Music-V-type language called Music 4C. I authored (coded) various instruments for Music 4C and used them to create new works.
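
Music 4C instruments were written in C in a Music-V idiom, so the real code looked nothing like this, but a short Python sketch conveys the shape of the task: an “instrument” is a routine that turns note parameters into a buffer of samples headed for the DAC.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sine_instrument(freq, dur, amp=0.5):
    """A minimal Music-V style 'instrument': an oscillator shaped by an
    amplitude envelope, returning raw samples."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    envelope = np.clip(np.minimum(t / 0.01, (dur - t) / 0.05), 0.0, 1.0)
    return amp * envelope * np.sin(2 * np.pi * freq * t)

# A "score" is then just a list of (start_time, freq, dur) note events.
score = [(0.0, 440.0, 0.5), (0.5, 660.0, 0.5), (1.0, 550.0, 1.0)]
out = np.zeros(int(SR * 2.0))
for start, freq, dur in score:
    i = int(SR * start)
    note = sine_instrument(freq, dur)
    out[i:i + len(note)] += note  # mix the note into the output buffer
```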

Make Something New

Some of my colleagues at the time focused their efforts on recreating existing instruments. They wanted an “accurate” string sound, or a pretty trumpet or clarinet timbre to come out of the speakers.

I never understood this.

If they wanted a pretty trumpet, clarinet, or violin, the floor below us was full of musicians proficient on those instruments, practicing every day to get better. They could just ask one or more of them to play their piece.

A Yamaha TX81Z synthesizer ready to crank out the cheesy flute sounds

Given the potential of a new medium to make new sound, why try to reproduce the old? I wanted my software to make sounds I had never heard before—that nobody had ever heard before. This might sound like a tall order, but it is precisely what the world got, over and over, with each newly invented instrument. Computer music was just another new instrument, but a flexible one that could facilitate the invention of many more. It was an opportunity to break the rules of physics, not to adhere to them!
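
As one concrete example of rule-breaking, software can sum partials that no vibrating string or air column will ever produce. A minimal sketch of inharmonic additive synthesis (my example, not code from that era):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def inharmonic_tone(base_freq, dur, stretch=1.37, n_partials=8):
    """Sum partials at base_freq * k**stretch instead of base_freq * k.
    No physical string produces this stretched overtone series."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        freq = base_freq * (k ** stretch)  # stretched, inharmonic partial
        if freq < SR / 2:                  # stay below the Nyquist limit
            tone += np.sin(2 * np.pi * freq * t) / k
    return tone / np.max(np.abs(tone))

tone = inharmonic_tone(220.0, 2.0)  # two seconds of a sound no string makes
```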

MIDI Fills The Planet With Crap

Then there were those who chose to focus their attention on MIDI synthesizers. While I saw MIDI as useful for one or two needs, mostly it seemed to be filling the planet with more crap. MIDI was nothing but canned sounds trying to imitate physical instruments, and failing miserably. The reproductions lacked the timbral complexity of the originals. In other words, MIDI took what was interesting about sound and destroyed it.

Eventually I focused my energies on the creation of GACSS, a software package that allowed me and others to easily create and compose with previously unheard sounds. Over the years MIDI continued to flourish, and it still dominates today. However, composers now have tools such as Max/MSP that make it easy to create new sounds without any difficult programming, so there’s really no excuse not to make your own thing.

What Should a Painting Machine Make?

My collaborative robotic painting machine (very much still in progress). Paintings on the wall behind it are a few of its first sketches.

Fast forward to now, and one of my current projects is a collaborative robotic painting machine. This machine will develop its own body of work, accept and consider input from others, and will use that input to create both paintings and music. There are a number of painting machines out in the world that people have created. But what’s driving me nuts, twenty years after I started down this road, is that these new painting machines are typically painting paintings that already exist or that the artist can already create themselves.

Perhaps the best known painting machine is Harold Cohen’s AARON. While it’s really just software these days (no hardware component), his goal for the machine is for it to paint paintings like Harold Cohen paints paintings. It’s an interesting technical problem, but why? Harold Cohen can already paint like Harold Cohen! Unquestionably there’s an interesting element there in seeing how the computer does it differently than Harold. But I ask the same question now that I asked before: why not use the machine to create something we haven’t seen?

I want to create machines that make things I haven’t yet seen or heard. To do so, I exploit those areas where the machine excels—and the humans don’t. In the case of a painting machine, the system is a lot better with qualities like repetition, accuracy, and endurance than I am. I can use these technological advantages to my benefit, while treating those things I’m better at (such as quick pattern recognition) as system constraints. Ideally, the result will be a new creation, something that could not have existed otherwise. But no matter what, I know it won’t be something I saw or heard last month, last year, or from all of history. And that means I have something to look forward to.

World Premiere of Shift Tonight

Jacob Barton playing the Udderbot

Jacob Barton will be performing the world premiere of my musical work titled Shift (2010) tonight (January 19, 2011) at the Contemporary Arts Center of Virginia in Virginia Beach, VA. Barton is the world’s premier Udderbot virtuoso, and Shift is written for one live udderbot and five recorded udderbots. He is currently traveling on his Udderbot World Tour 2011.

From Jacob’s website:

Made of a glass bottle, a rubber glove, and water, the udderbot’s quirky appearance and unassuming timbre make it a “friendly” instrument; however, with a range greater than the concert flute’s, the udderbot is no mere novelty. The recital will feature music for solo udderbot player; udderbot with electronics; chamber music with traditional instruments; and multiple udderbots. The udderbot will substitute for its electronic predecessor, the theremin, for the oldest piece on this recital, Martinu’s “Fantasie” from 1944. A majority of the music being performed is “microtonal”, i.e. using notes and intervals that fall “between the keys” of the piano (but are no trouble at all for the udderbot).


Temporal Imaging, Reality TV, and The Vision Machine

In preparation for leading a discussion last fall, I did a deep reading of chapter 5 of Paul Virilio’s The Vision Machine (download a free PDF). As I’m finding is typical of Virilio’s writing, he packs a lot of ideas into a small number of words. But there are a few overarching threads that weave in and out of the chapter, and I present a couple of them below. While the book was published in 1994, it is still very relevant as a critique of today’s technology and culture. In fact, at times it seems eerily predictive of our present.

Is TV Affecting Our Ability to Store Memories?

Of most interest for me was the idea that the temporal nature of film, video, and computer graphics is altering our ability to create memories. The way I interpret him is that we only have available to us so much “depth of time” (in relation to depth of field), and splitting our experience into discrete time slices thus limits how much of it we can retain. Before temporal imaging (e.g. film or video), our memories were not stored in a frames-per-second manner, but were more about story and sense perception.

In real, non-imaged life, the temporal resolution of memory is affected by the context of the event being remembered. Think about the difference between memories of a casual conversation versus a car accident. The conversation is likely to result in memories primarily about some of the things that were said, with very little temporal resolution of the physical surroundings. But in a car accident, the context results in an extreme temporal focus. Time appears to slow down as each detail of the impending critical situation is imprinted on our brains. We would have a very different story to tell about each of those events, and the types of memories recorded depend on the context in which they were perceived.

25-second clip from Hell’s Kitchen without audio, Season 8, Episode 1, 2009

But in virtual experience driven by temporal imaging such as film or TV, the pace of cuts can flood our short-term memory. Consider the above video clip, taken from the opening of the reality TV show Hell’s Kitchen, Season 8 (I have removed the audio). How much of this can you retain? Recent studies suggest that we can only remember three or four things at once. In a real-life experience, the context of an event can expand the length of these three or four things. But the frames-per-second nature of video and the fast pace of its edits may be robbing us of the ability to perform such an expansion. Virilio would say that our temporal construction of memory is an essential part of what helps us distinguish between real and fake, and if virtual imaging is changing our ability to store memories, then it’s changing our ability to identify the real.
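
The pace of those edits is measurable. Here’s a rough sketch of one way to count cuts, assuming OpenCV is installed; the frame-difference threshold is arbitrary, and real shot-boundary detection is considerably more robust than this:

```python
import cv2
import numpy as np

def count_cuts(path, threshold=40.0):
    """Roughly estimate the number of cuts in a video by flagging frames
    whose mean absolute difference from the previous frame is large."""
    cap = cv2.VideoCapture(path)
    cuts, prev = 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and np.mean(cv2.absdiff(gray, prev)) > threshold:
            cuts += 1
        prev = gray
    cap.release()
    return cuts

# Hypothetical usage: count_cuts("hells_kitchen_clip.mp4")
```

Run on a 25-second clip, dividing the count by 25 gives cuts per second, a crude proxy for the flooding described above.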

This is one of the concepts I explore in my recent work, titled Speed of Reality. The piece looks at how these fast edits are affecting our perception of reality TV. Does it change how we form memories? If yes, does it change it in non-TV contexts as well? Is “reality” media altering non-mediated reality?

Pervasive Computer Vision

The Vision Machine, by Virilio

While my interpretations regarding memory formation outlined above were of most interest, Virilio focuses most of the chapter on what he calls the Vision Machine. He saw us as entering a world of sightless vision, where machines create their own view of the world for their own purposes—thus changing our relationships to reality and power. This “splitting of viewpoint” between humans and computers is an essential component for artificial intelligence.

Vision machines that perceive and interpret us for their own purposes, even in our private spaces, have led to an erosion of public and private space. They also enable a new genre of military deterrence, where images, and the manipulation of them, become more effective as ammunition than conventional weapons are. This leads to a “total dissimulation” where wars are fought through images by the Perceptron, a 24-hour, real-time, telesurveillance-enabled and -controlled vision machine. In this space the plausible or implausible replaces true and false—it’s all about what the images may or may not represent. The speed of imaging becomes paramount, and speed thus becomes the machine’s primary sense. As such, extensive time gives way to an experience of ‘intensive time’: technologically accelerated moments beyond our comprehension that result in a new concept of reality where speed prevails over time and space.

Certainly Virilio’s predictions about sightless vision have come true. In the New York Times’ recent article, titled Smarter Than You Think – When Computers Keep Watch, the author details many of the ways that computer vision technologies have crept into our daily lives. Organizations from the health care industry to computer gaming companies now employ ‘smart’ vision technologies in order to improve human-computer interaction and data collection. Without question, these applications are fun and potentially useful. But the technologies are also used by law enforcement and corporations to elicit our unspoken reactions and thoughts about the things we see and hear.

Google’s landmark recognition program Goggles can recognize architectural artifacts in photographs and automatically provide background information about them. Some have asked Google to include facial recognition in the program, something they could easily do. Eric Schmidt, Google’s CEO, says they think doing so would be “too sensitive” and would risk “enabling stalker behavior.” At least for now, humans appear to be in control of the technology. But as these applications become more and more tied into national surveillance camera networks, and increasingly used by machines for machines, Virilio’s prediction of a Perceptron could inch closer to reality.

How Accessible Should An Artwork Be?

I have heard and held a variety of opinions over the years on the issue of accessibility in the arts.

Blasting Away My Audience with Loud, Ugly Music

Cover image for an old CD of my music

When I started out in music school I was proud if a listener walked out on a performance of my work, unable to understand or enjoy it. All the more space for those who did (e.g. me and the few other people on the planet who had a context for understanding the piece). This was fine for a year or two, but the more I listened and the more I wrote, the less excited I became about driving away my audience. It wasn’t that I wanted my music to be any less complex, or any prettier. But I did want a wider audience, and to get one I had to try harder to provide a context for those who lacked one.

Looking for Increased Accessibility in my Music

By the time I left music school I was writing some of my most raucous-sounding work, but it was definitely more accessible. Once I integrated more recognizable forms, worked harder on titles that reflected something in the music, and shifted my construction of rhythm and pitch into areas that showed awareness of the outside world, people stopped walking out. The result wasn’t a reduction in complexity or a dumbing down of concept, but an increase in layered meaning (more on that in a bit).

Outside a New Medium Looking For a Context

For the last ten years or so my artistic focus has been on the visual arts. When I started the switch to visual work, my medium was new, but my conceptual approach was already well developed. I still wanted that higher degree of accessibility, but to provide it in the new medium I would need a broader understanding of the visual contexts viewers bring to a work. Thus began an intensive, self-driven course in the artwork of others.

One venue for this self-study was museums. I spent a lot of time in them, looking at everything I could. Very quickly I ran back into the accessibility issue, but this time as the outsider looking in. I soon grew tired of taking my time to look at work that cared nothing about me. Even as I learned more and more, I unsurprisingly continued to find works that presumed a context I didn’t have. Here I was, a knowledgeable and interested person trying to get something from the work, and I kept finding pieces that provided nothing.

Layered Meaning

By layered meaning I mean a set of meanings that provides something of interest to viewers and listeners with a variety of backgrounds. These viewers range from the interested novice (someone willing to spend a few seconds looking at my work) all the way up to seasoned artists or critics who bring a lifetime of art historical context with them when they consider what I’ve made. I want both of these viewers, and everyone in between, to take something away from the piece: an idea, a suggestion, and/or a question. Perhaps more importantly, I want my work to invite those with less context to engage with it and learn more. In other words, I want their efforts to be rewarded, not repelled.

Layered Meanings in Speed of Reality

A good example of my focus on this is a recent piece titled Speed of Reality. This work, which explores issues of speed, editing, and sound in reality TV, is composed to provide something for each of these viewers.

For those without a developed art understanding, the piece presents a portrayal of a visual medium many are familiar with—reality TV. If your only context is having watched a reality TV show, you will hopefully walk away from this piece thinking about how what you watch is a constructed presentation with particular intentions. If you spend a bit more time watching the work, you can understand more about the mechanics of that construction, and perhaps how it relates to intention. In other words, the piece invites you to consider it further.

For those with a sophisticated art historical understanding, the piece tries to provide all of the above and more. They might also think about the new ways reality TV is mediating reality, or how the structure of the program alters the meaning of the content. Maybe they would even tie it into ideas of mine regarding how memory is formed in the face of fast-paced cuts, ideas that grew out of my reading of Paul Virilio. A viewer with a complex background in new media would hopefully take away that I’m interested in how algorithmic the editing of these shows has become.

As the artist I can’t begin to predict all of what the viewer might see in my work. But by bringing intention to the concept, and by working to address those with a variety of backgrounds, I hope to engage my audience in a way that leaves them thinking about rather than (solely) cursing what they’ve seen or heard.
