Blog

Grosser Wins Winter 2011 Creative Divergents Award

I just found out I’ve been awarded the Winter 2011 Creative Divergents Award! Very happy! The work I entered was my Interactive Robotic Painting Machine.

Interactive Robotic Painting Machine

This was a juried competition, but also included a public component where anyone could vote on and comment about the entries. I’m honored to be included amongst the previous winners: Nick Rodrigues, Lin Zhang, Hye Yeon Nam, and Phillip Stearns. This season’s jury included Nam, as well as Travis Lee Street, Gabriela Vainsencher, and the competition’s founder, Dawn Graham.

Needless to say, I’ll be putting the prize money ($1000) to good use, funding technology for my next piece.

Recording of Not Pitch Released on CD and Digital

Contemporary saxophonist and assistant professor Rhonda Taylor has released a new compact disc that includes her recording of my Not Pitch for amplified baritone saxophone and computer-generated sounds.  The CD is titled Interstice, is published on the shh label (#0135), and is available for purchase on disc or as a digital download via iTunes, Amazon, or CD Baby.

The cover of Interstice, by Rhonda Taylor

I wrote Not Pitch in 1995 for my good friend and saxophonist Taimur Sullivan. He premiered it that year at the Settlement Music School in Philadelphia, PA as part of a concert with his sax quartet PRISM. Taimur now also directs the saxophone studio at the North Carolina School of the Arts, where he has been a professor for many years. In recent years, Rhonda has toured the work around the United States, culminating in her release of the piece on this new CD. This included a live performance at her CD release concert on March 2, 2012 at New Mexico State University where she is an assistant professor.

Saxophonist Rhonda Taylor performing Not Pitch at New Mexico State University, 2009. photograph by: Robert Yee

Rhonda is an incredible saxophonist, and, quite frankly, performs the work masterfully. Her sound on the baritone is intense and active, and she brings a hard-core virtuosity to the piece. I couldn’t be happier with her recording of the work, and my only suggestion is for you to play it loud. Before this CD project, Rhonda released recordings of other contemporary composers, including a disc of works by my good friend and composer Rick Burkhardt.

I created the computer-generated sound component of the work using my custom synthesis software package titled GACSS. The program note I included with the disc reads as follows:

Looking back on this piece from the present, I see themes it shares with my current practice. For example, I’m interested in the issue of agency. Who has it? Who doesn’t? Applying this question to a performance of Not Pitch: is the tape running the show, or is Rhonda? When I composed the work, who had more power then? The compositional system I set up to generate material, or myself as an operator of that system? And when it came time to produce the tape, how much control did my synthesis software hold over the process? I have my suppositions about these questions. You will have your own.

The rest of Rhonda’s performances on this CD are just as good as her performance of my piece. I highly recommend you pick up a copy.

Thanks to Rhonda for all her hard work and support of this piece!

n to Watch: Benjamin Grosser

I have a solo show at Figure One as part of their n to Watch series during the month of February.  I’m showing two works: Speed of Reality, in the front room, and Protocols of Looking, in the back.

Protocols of Looking (2011) at Figure One

n to Watch is a year-long set of shows curated by Allyson Purpura, Terri Weissman, and Jimmy Luu “that brings relevant and engaging ideas from the School of Art + Design’s classrooms to a public forum.”  Figure One is located at 116 N. Walnut in Downtown Champaign.  Open hours are Wed. 12-4, Thu-Sat 5-9.  There is a closing reception on Friday, Feb. 24th from 6-9p.

Interview with Roboteknia Magazine

I recently talked with Fernanda Ares of Roboteknia, a Mexican robotics magazine, about my work titled Interactive Robotic Painting Machine.  In addition to chatting about the machine itself, we discussed the role of technology in the arts, whether robots have ‘souls,’ and where things are headed in the art/tech world.

Roboteknia, January, 2012, pp. 24-25

While Roboteknia published an English to Spanish translation of our email interview in the January 2012 issue, I am posting my original English answers below. In what follows, the questions are all Fer’s and the answers are all mine.

Interview with Roboteknia, January, 2012

Why did you decide to combine arts with technology?

I’m compelled to make art and I’m fascinated by technology. Therefore the simple answer is that I’m combining the two things I’m most passionate about into one unified endeavor so I can do them both at the same time. However, my fascination with technology goes beyond a general interest and borders on obsession—an obsession with how ubiquitous digital technologies are changing our experience of the world. Whenever I use these technologies I am constantly analyzing them in terms of their cultural, social, and physiological effects. These analyses lead me to questions that I transform into works of art. I use those same technologies as a medium in that art in order to draw the viewer in, to give them a point of entry that has parallels with their own technological experience. Therefore the less simple answer is that my mixing of art and technology is a composed strategy that enables a broadly accessible critical examination of those things I most care about.

What are the arts’ possibilities through technology?

Throughout history, each new artistic medium has brought new possibilities to the artists who use it. For example, oil paint was a new technology at one time, and it enabled a vividness of color representation that had not been seen before. But few people carry oil paint in their pockets, while almost everyone carries at least one computer with them everywhere they go (e.g. cell phone, laptop, tablet, etc.). They anthropomorphize them, covet them, and curse them, but regardless of the feeling of the moment, they live with them all day long. This commonality of technological experience opens up new territories for artists to explore when they use those same technologies as a medium in their work.

An additional aspect of technology-based art that’s most interesting to me is its ability to enable interactions between the art and the viewer.  While previous mediums encouraged thought and contemplation, technologically-interactive artworks can now also engage in an active dialog with the viewer.  They can listen and respond, they can watch and change, they can feel and adjust.  I use such interactions as a strategy in my work to encourage personal consideration of the questions I’m exploring.

From where did you get the idea to create a robot that can paint? What is the purpose of your project?

I have long been interested in the potential role of adaptive or artificially-intelligent systems in the task of art making. As a composer, I have developed software that writes scores and/or creates sound. But as my focus switched to visual art, and specifically into investigations of technology, I started to think about how our everyday interactions are increasingly mediated by technology. These systems, such as mobile phones, search engines, or social networking sites, are designed to anticipate and support our needs and desires while facilitating those interactions. As these systems grow in complexity, or intelligence, how does that intelligence change what passes through them? Further, how does that intelligence evolve to make its own work for its own needs?

This last question served as the launching point for my Interactive Robotic Painting Machine. Does an art-making machine of my design make work for me or for itself? How does machine vision differ from human vision, and is that difference visible in the output? Is my own consciousness reinforced by the system or does it become lost within? In other words, is this machine alive, with agency as yet another piece of the technium, or is it our own anthropomorphization of the system that makes us think about it in these ways?

What I built to consider these questions is an interactive robotic painting machine that uses artificial intelligence to paint its own body of work and to make its own decisions. While doing so, it listens to its environment and considers what it hears as input into the painting process. In the absence of someone or something else making sound in its presence, the machine, like many artists, listens to itself. But when it does hear others, it changes what it does just as we subtly (or not so subtly) are influenced by what others tell us.

What has been your biggest challenge on this project?

Scale. The project is so broad and varied in its requirements that I had to keep refocusing myself on the final result. Only so much could be planned ahead of time, and so I just had to dive in and start. I began with the building and assembling of the machine’s hardware, adapting an open-source CNC design as the starting point. Then I configured the low-level computer controls of the machine’s motors. Just that much took a long time, although it was probably only half of the work. From there I had to design and realize the machine’s “brains.” This required learning a new programming language, banging my head up against mathematical problems beyond my experience, understanding and adapting an artificial intelligence algorithm, and more. In the end I have a many-hundred-pound animated being driven by three networked computers, in need of constant care and feeding from a human (me). It has been quite a ride.

Where have you shown this project?

So far, few people have seen the machine in person, but many thousands have seen it via the Internet or other media.  I hope to show it live soon, and am talking with a group in Germany about showing it there.

What was the people’s reaction?

The reaction has been extensive.  My videos on Vimeo.com have received more than 70,000 plays in the last few months.  The machine has been written about in major online spaces, including the Huffington Post, Engadget, HackADay, Make: Blog, Salon, Discovery News, Boing Boing, and more.  Lately the machine is getting more press from international sources, including sites from Taiwan, Greece, France, Russia, Brazil, Japan, Spain, Germany, Poland, and of course, Mexico.  I’m humbled by the widespread response.

People’s individual reactions range from amazement to fear to admiration to near hostility. Some are worried that the machine harkens the coming of a Terminator-type age of machine domination. Others, including several artists, worry that I’m going to put painters out of business. A few critique the paintings by themselves, seemingly uninterested in the machine’s role in their creation. But mostly the response has been positive and supportive. I most enjoy it when others discuss the machine in the same way that I do: as its own person, its own entity with its own feelings and goals.

When did you decide to leave the conventional arts and start going deeper into new technologies?

It was about 3-4 years ago. I have been making art of some sort for as long as I can remember. In college I studied music composition and performance. Then I worked in the sciences, running facilities for visualization and imaging, directing and producing visualizations and animations, and writing software for remote and virtual instrument control. During that time I switched my artistic focus from music to visual art, making paintings and photographs. Eventually I started to merge the two, and now making art is my full-time focus.

Who do you admire in this industry?

I am inspired by many artists and composers.  Iannis Xenakis is a composer whose pioneering work combining computers and music has had a lasting effect on my aesthetic sensibility.  I had the great fortune to study with Salvatore Martirano, inventor of the world’s first real-time interactive composition machine and an invaluable teacher.  Within the visual arts, I admire the works of Bill Viola, Dan Graham, Roxy Paine, Rafael Lozano-Hemmer, Jim Campbell, Golan Levin, Camille Utterback, and many more.

Are there more artists doing the same as you?

There have been a number of automated or semi-automated drawing or painting machines over the years. Perhaps the most well known is Harold Cohen’s AARON, a software-based painting system that he has trained for more than 30 years to create paintings like his own. Others, such as Leonel Moura, have built autonomous robots that draw or paint. Roxy Paine has a series of works that create sculptures or paintings using a specific automated process.

While many of these works have been inspirational for me, I believe that what sets my machine apart from these is my interest in creating not an autonomous robot, but a thinking AND feeling artificially-intelligent artist, one that makes its own paintings in its own style while listening for input from its audience.  One that considers that input as feedback into its own artistic process, potentially modifying what it does based on what it hears.  This opens up questions of agency while challenging our notions of creativity and inspiration.

Which countries are leading the way in arts and technology?

Berlin, Germany has become a center of activity within technology-based new media art.  Europe in general appears to be quite active in this area.  I’m sure other countries or regions are just as active, but few provide artists with the same levels of public institutional support as they do.  Thus it may simply be that there are many pockets of activity I’m unaware of due to lack of promotion or resources to publicize the efforts.  The USA certainly has a growing base of artists working with technology.  Art schools across the country continue to add new media programs that support technology-based art practices.  However, support for this work within US museum structures is still in its infancy.

There are plenty of other places around the globe where interesting things are happening but I wouldn’t pretend to be able to exhaustively list them.

Are you working right now on another project?

I am currently working on a piece that uses protocols of interaction between two embodied computational systems to explore the role of gestural communication and how it is being changed by online social spaces. These systems, which will be represented by interactive human forms on screens, will use artificial intelligence to enable gestured communication between themselves, and between themselves and the viewer. With this work I continue to be interested in issues of agency and anthropomorphization. What is the relationship between these systems and the viewer? Are they the subject or the object?

How are robots influencing your daily life?

Almost any product we touch these days was built or partially-built by a robot somewhere.  Everything from our cars to our phones, from our furniture to our food was assembled, drilled, or packaged by a robotic system.  If we expand our definition of robotics to include artificially-intelligent computational systems, then that list grows even longer.  These machines and systems have long-reaching implications in terms of their political, social, cultural, and economic effects on society.  If it can be automated, it will.  If it can’t, it will be soon.  At this point we literally cannot escape these systems because we have designed a society dependent on them.

You are teaching art at a university. Are your students interested in this kind of project? What do you do to motivate them?

This year I am teaching two different courses on the topic of interaction, one that focuses on conceptual design and another that emphasizes the realization of designed interactive experience. In each course I show lots of examples from artists’ works, and we discuss readings by artists and scholars on relevant topics. Today’s undergraduates can’t recall a world without the world wide web. They can barely recall a world without cell phones. I find that because of this, at first they sometimes have trouble seeing how their own human-machine interactions are the result of composed pathways. But they also have a hunger to see those pathways, to consider them, and to redesign them for themselves and others. As a teacher, the best thing I can do is to give them the tools to teach themselves, to show them how I learn, and to react to and critique their creations. That is all they need.

Could you consider yourself a robot developer?

I consider myself an artist, not a robotics developer.  My machines are crude from an engineering perspective.  My programming is crude from a software perspective.  But regardless, my machine is an animated being with its own ideas and its own motivations.  For me, what it makes us think about is more important than what it literally makes for us.

Where or how did you learn robotics?

I picked up all the basics from books and the Internet.  I am a strong believer in the importance of free information, and so much of my art practice depends on it.  I post my own how-to information on my website and various forums around the web in order to return the favor to others.

What is your answer to the people who say that a robot cannot create art?

They are wrong.  Look!

Do you think that in every robot, there is a creative human soul?

I’m not sure I’d use the word ‘soul’, but I understand the question.  Perhaps I’d rephrase it to ask whether a robot is alive?  Does it have agency?  Can it make decisions for itself and not just for its designer?  How does what it sees differ from what we see, and is that difference visible in what it produces?  Is a designer’s consciousness reinforced by the system she builds, or does it become lost within?  I won’t answer these questions for you.  Instead I built my Interactive Robotic Painting Machine as a way to encourage everyone to think about them for themselves.

Where did you assemble your machine?

Since it took me over a year from start to finish, the machine was assembled in multiple locations.  These include my home studio and workshop, a warehouse studio I rented, and now my studio at the university.  The programming of the system was accomplished in those places, as well as in my favorite coffee shop.

How does the robotic painting machine work?

The system is built from a complex mix of hardware and software components, all networked together and managed from a central control system. This central software utilizes a genetic algorithm (GA) as its decision engine, making choices about what it paints and how it paints it. Audio captured by its shotgun microphone is subject to real-time fast Fourier analysis, providing the system with useful data about what it hears. The resulting painting gestures are transformed into codes that can be sent to the Cartesian robot that manipulates a paintbrush in three dimensions. These codes break down each gesture into a series of primitive moves, describing everything from how much pressure to use on a brush stroke to how to put more paint on the brush.
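For the curious, here is a minimal sketch of the kind of real-time FFT analysis described above, written in Python (the language of the central control software). The buffer size, band edges, and feature names are illustrative assumptions for this sketch, not the machine’s actual parameters.

    # A rough sketch of FFT-based analysis of one buffer of microphone audio.
    # Buffer size and band edges are hypothetical, chosen only for illustration.
    import numpy as np

    SAMPLE_RATE = 44100  # assumed microphone sample rate (Hz)

    def analyze_buffer(samples: np.ndarray) -> dict:
        """Summarize a buffer of audio as a few coarse features the
        painting controller could listen to."""
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)

        # Collapse the spectrum into a handful of bands (edges in Hz).
        edges = [0, 200, 800, 3200, SAMPLE_RATE / 2]
        bands = [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
                 for lo, hi in zip(edges[:-1], edges[1:])]

        return {"total_energy": float(spectrum.sum()), "bands": bands}

In practice, features like these are sent over the network to the central control software, which treats them as input into the evolving set of gestures.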

Three separate but networked computers manage the system. The first runs the central control software, custom written in Python. Using the GA, this software begins each painting with a random collection of individual painting gestures, and proceeds to paint them. As it paints, it listens to its environment, considering what it hears as input into the gesture it just made. These sounds, along with its own biases, are considered at the end of each generation of gestures and used to produce a new set of gestures from the old. Thus a single painting is a rendering of many generations on a single canvas that illustrate a path towards an evolutionarily desirable (i.e. most fit) result. A second computer manages the brush camera, its projection, and also performs the audio analysis, sending that data to the central machine. A third computer acts as the low-level manipulator of the robot, accepting move commands from the central system and using those to drive stepper motors that move the robot in real-time.
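To make the generational loop a bit more concrete, here is a heavily simplified sketch in the same spirit: gestures are the individuals, and the analyzed audio nudges their fitness. Every detail here (the Gesture fields, the fitness weighting, the population size) is a hypothetical stand-in for illustration, not the actual control code.

    # A simplified sketch of a gesture-evolving genetic algorithm whose
    # fitness is influenced by what the machine hears. All fields and
    # weights are hypothetical stand-ins.
    import random
    from dataclasses import dataclass

    @dataclass
    class Gesture:
        x: float
        y: float
        pressure: float
        length: float
        fitness: float = 0.0

    def random_gesture() -> Gesture:
        return Gesture(random.random(), random.random(),
                       random.random(), random.random())

    def evaluate(g: Gesture, heard_energy: float, bias: float) -> float:
        # Hypothetical fitness: the machine's own bias plus an audio influence.
        return bias * g.pressure + heard_energy * g.length

    def next_generation(pop, heard_energy, bias=0.5, mutation=0.1):
        for g in pop:
            g.fitness = evaluate(g, heard_energy, bias)
        pop.sort(key=lambda g: g.fitness, reverse=True)
        parents = pop[:len(pop) // 2]
        children = []
        while len(children) < len(pop):
            a, b = random.sample(parents, 2)
            children.append(Gesture(
                x=(a.x + b.x) / 2 + random.uniform(-mutation, mutation),
                y=(a.y + b.y) / 2 + random.uniform(-mutation, mutation),
                pressure=(a.pressure + b.pressure) / 2,
                length=(a.length + b.length) / 2))
        return children

    # One painting = many generations rendered onto the same canvas.
    population = [random_gesture() for _ in range(16)]
    for generation in range(10):
        # paint(population)  # send each gesture to the robot (not shown)
        heard = 0.3           # stand-in for the analyzed microphone energy
        population = next_generation(population, heard)

The real system is far more involved, but the shape is the same: evaluate the current gestures in light of what was heard, keep the fittest, and breed a new generation from them.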

The robot itself was built by adapting an open source CNC design. In addition to fabricating and assembling each piece from scratch, I made significant custom modifications to the linear drive systems in order to facilitate fast rapids while maintaining repeatability. Luckily, because of the relatively large size of a paintbrush head, my accuracy requirements are lower than those of a typical CNC design. This allowed me to design the drive system to be especially fast while using relatively low-cost and low-power motors.

It is important to understand that what the machine paints is not a direct mapping of what it hears. Instead, the system is making its own decisions about what it does while remaining open to influence from others. To understand this, I suggest you consider the machine an artist in its own right. Just as a human artist is influenced by what they hear (an influence that is sometimes easy to see and other times not so easy), the machine is influenced by what it hears. What it makes will be different in the absence of input, but it is not easy to trace how any input manifests as change.

Is technology the future of art?

New technologies will always play a role in new art.  We’re still painting, still using cameras, and still drawing with charcoal.  All of these were new at one point.  What seems most interesting about this moment in time is our focus and fascination with technology as a subject in addition to a medium.  Will that change?  I wouldn’t pretend to be able to predict where art will go, but I expect our obsession with new technologies will continue unabated.  As long as that is the case, artists will make art about it.

Upcoming Presentation: What Does Software Want? [new date]

On February 14th I’ll be giving a presentation at the Beckman Institute for Advanced Science and Technology on my artistic practice and research.  The talk is titled “What Does Software Want? Recent Artistic Projects and Research.”

Speed of Reality (2010)
installation view
Co-Prosperity Sphere, Chicago, IL (photo by C. Bakker)

The event is free and open to the public. It will be held on February 14, 2012, from 12-1pm in room 2269 (the 2nd-floor tower room) of the Beckman Institute. Beckman is located at 405 N. Mathews Ave. in Urbana, IL. Free pizza will be served, or you can bring your own lunch if you prefer.

Here’s the abstract and bio I submitted:

What Does Software Want?  Recent Artistic Projects and Research

Ben Grosser
MFA Candidate, School of Art+Design, University of Illinois

Software now facilitates many aspects of daily life, whether it’s your phone, your bank, your car, or your refrigerator.  Though immaterial, software is still a designed object created by humans.  I’m interested in how those designs serve their creators, as well as the systems those creations reside within.  Why do those designs result in certain kinds of human-computer interfaces?  How do those interfaces interact with humans to achieve their goals?  What does software want?   In this talk I will present an overview of my recent research questions around this topic and the resulting artworks I have generated in response.

Ben Grosser is an artist and a composer, and is currently an MFA Candidate in New Media in the School of Art+Design at UIUC.  Previously he was the Director of the Imaging Technology Group at the Beckman Institute.  His artistic work has been covered widely in the online press, including articles on Boing Boing, the Make Blog, Engadget, Fast Company, and Discovery News.  The Huffington Post recently said of his Interactive Robotic Painting Machine that “Grosser may have unknowingly birthed the apocalypse.”  The St. Louis Riverfront Times called his music “very loud and ugly.”

Hope to see you there!

Press on Reload The Love!

WebProNews recently posted a great article about my Reload The Love! project. Author Drew Bowling examined the work from social and technological angles, with a particular interest in how the piece questions our reliance on Facebook’s notification icons:

The project is equal parts social psychology experiment and software development ingenuity as it cleverly explores the value social networks have on a person’s self-esteem and how something as seemingly minuscule as red word bubble notifications can impact a person’s mood.

Take a look and read for yourself.

News Coverage of Interactive Robotic Painting Machine (Expanded List)

A few weeks ago I wrote about the online coverage of my Interactive Robotic Painting Machine project. Since then the list has continued to grow, with writeups on a number of additional high-profile sites, as well as expanded international coverage.

Interactive Robotic Painting Machine on the front page of Engadget.

While much of the writing addressed the artistic or philosophical aspects of the work, some of it included humor as well. A few of my favorite quotes:

Grosser may have unknowingly birthed the apocalypse with his Painting Machine. —Huffington Post

He [Grosser] created a robot that’s — how to put this gently? — a thin-skinned neurotic. —Fast Company

I can already hear the outcry of the artists’ associations for protection of their works as soon as the following robots would fall into the hands of pirates. —Robonews [Germany], via Google Translate

If Warhol were still around, we’re pretty sure the man would’ve snatched up this contraption as a Factory-approved objet d’art. —Engadget

Overall a much nicer, more cultured use of artificial intelligence than, say, a swarm of flying death robots. —the creators project [UK]

An incomplete list of coverage around the ‘net (if a link is missing, it’s because the page went offline since this list was first published):

How the Technological Design of Facebook Homogenizes Identity and Limits Personal Representation

by Benjamin Grosser

Download the paper (PDF)

Note: this paper was published in Hz #19 in 2014. Please cite that version.

The Facebook gender choice dropdown on their new account signup page.

Abstract

This paper explores how the technological design of Facebook homogenizes identity and limits personal representation. I look at how that homogenization transforms individuals into instruments of capital, and enforces digital gates that segregate users along racial boundaries. Using a software studies methodology that considers the design of the underlying software system, I examine how the use of finite lists and links for personal details limits self-description. The ways in which the system controls one’s visual presentation of self-identity are analyzed in terms of their relation to the new digital economy. I also explore the creative ways that users resist the limitations Facebook imposes, as well as theorize how technological changes to the system could relax its homogenizing and limiting effects.

Introduction

Ever since its inception, people have used the technology of the Internet to represent themselves to the world. Sometimes this representation is a construction based on who they are outside the network, such as with a personal webpage or blog. Other times people use the built-in anonymity of the Internet to explore and engage alternative identities. This identity tourism (Nakamura, 2002) takes place within game spaces (e.g. MUDs, MMORPGs), chat rooms, or forums, as well as within those spaces already mentioned such as webpages and blogs. In each case, the underlying technology that facilitates this network society of digital representations is software. How this software is designed by its creators determines the ways that users can (and cannot) craft their online representation.

The most popular network space for personal representation is Facebook, the world’s largest online social network. The site has more than 500 million active users and has become the most visited website in the United States, beating out Google for the first time in 2010 (Cashmore, 2010). Facebook functions as a prime example of what Henry Jenkins (2006) calls “participatory culture,” a locus of media convergence where consumers of media no longer only consume it, but also act as its producers. Corporations, musicians, religious organizations, and clubs create Facebook Pages, while individuals sign up and fill out their personal profiles. The information that organizations and people choose to share on Facebook shapes their online identity. How those Pages and profiles look and the information they contain is determined by the design of the software system that supports them. How that software functions is the result of decisions made by programmers and leaders within the company behind the website.

This paper explores how the technological design of Facebook homogenizes identity and limits personal representation. I look at how that homogenization transforms individuals into instruments of capital, and enforces digital gates that segregate users along racial boundaries (Watkins, 2009). Using a software studies methodology that considers the design of the underlying software system (Manovich, 2008), I look at how the use of finite lists and links for personal details limits self-description. The ways in which the system controls one’s visual presentation of self-identity are analyzed in terms of their relation to the new digital economy. I also explore the creative ways that users resist the limitations Facebook imposes, as well as theorize how technological changes to the system could relax its homogenizing and limiting effects.

Continue reading

News Coverage of Interactive Robotic Painting Machine

Interactive Robotic Painting Machine on the Front Page of Engadget

After I posted a page about my Interactive Robotic Painting Machine about ten days ago, it received coverage by a number of high-profile blogs, including Boing Boing, Engadget, and Make. I was interviewed for a new syndicated news show called “Right This Minute” (I’ll blog more details about the air date when I know), Twitter was bouncing with links to the project, and my short video received more than 35,000 plays. It has been quite a week! I appreciate all the interest, emails, and questions that everyone has sent my way.

An incomplete list of coverage around the ‘net:

Personal Depersonalization System Covered By News-Gazette

From the 'Art and About' blog on News-Gazette.com

Melissa Merli, arts writer for the News-Gazette, wrote about my new work Personal Depersonalization System on her Art and About blog. The article, titled “Figure One show looks at knowledge acquisition and subverting Google”, explores the piece at length. Calling my work “one of the most relevant or timely pieces,” Merli asks about my attempts to depersonalize my Google profile: “Can Ben run? Can he hide?”

The group show this article is about, Accepted Knowing, is on view at Figure One through August 26, 2011.
