MFA D+T Bootcamp 2012



This is the final incarnation of Sound Tagger, our group’s final code project (me, Salome, Siri, and Huy), though I hope it’s only the first of many iterations. I’m quite proud of what we accomplished, and I wanted to say thanks to Conor for being awesome and helping us through it, as well as the rest of our team for being even awesomer and actually making the damn thing. With this I’ve finished my last bootcamp project, which is why this sounds like a fucking Academy Awards speech.


My final project is killing some mosquitos.

You will have 20 seconds to kill 15 mosquitos flying around the sketch; otherwise, they’re gonna bite you.
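A minimal sketch of that win/lose rule in plain Java (the class and method names here are made up for illustration, not from the actual sketch):

```java
// Hypothetical model of the game rule: 20 seconds to swat 15 mosquitos.
public class MosquitoRules {
    static final int TIME_LIMIT_MS = 20_000; // 20 seconds
    static final int TARGET_KILLS = 15;

    // Returns "win", "lose", or "playing" for a given elapsed time and kill count.
    static String state(long elapsedMs, int kills) {
        if (kills >= TARGET_KILLS) return "win";
        if (elapsedMs >= TIME_LIMIT_MS) return "lose";
        return "playing";
    }
}
```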

How It Works
1. The program draws the interface.
2. The user inputs spray paint as a line.
3. The line is invisibly divided into small intervals.
4. The program captures the mouse’s x, y positions at the line’s intervals.
5. The program chooses notes according to those x, y positions.
6. The program plays the notes.
7. The user can replay the notes.
8. The user can save the music.
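The divide-the-line-and-pick-notes steps above can be sketched in plain Java (the real sketch runs in Processing; the class and scale here are illustrative assumptions, not the project’s actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: divide a painted line into intervals and pick a
// note for each sample point.
public class SprayLine {
    // A pentatonic scale keeps any combination of picked notes consonant.
    static final int[] SCALE = {60, 62, 64, 67, 69}; // C major pentatonic (MIDI)

    // Map an (x, y) canvas position to a MIDI pitch: x selects the scale
    // degree, y selects an octave band.
    static int pickNote(float x, float y, float width, float height) {
        int degree = Math.min((int) (x / width * SCALE.length), SCALE.length - 1);
        int octave = Math.min((int) (y / height * 3), 2); // 3 octave bands
        return SCALE[degree] + 12 * octave;
    }

    // Sample a straight line segment at fixed intervals and collect one
    // note per sample.
    static List<Integer> notesAlong(float x1, float y1, float x2, float y2,
                                    int samples, float width, float height) {
        List<Integer> notes = new ArrayList<>();
        for (int i = 0; i < samples; i++) {
            float t = samples == 1 ? 0 : (float) i / (samples - 1);
            float x = x1 + t * (x2 - x1);
            float y = y1 + t * (y2 - y1);
            notes.add(pickNote(x, y, width, height));
        }
        return notes;
    }
}
```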

[import libraries]
– soundcipher
– minim

[set variables]
– mouse positions
– etc.

– size
– smooth
– no fill
– no stroke
– variable values

– background
– buttons
– color palette toggle on/off
– reset button
– change background button
– play button
– record button

– set the areas for each button

– set variables (startTime, channel, instrument, pitch, dynamics, duration, articulation, pan)
– set tempo
– instantiate notes
– select notes according to interval of mousePressed and mouseReleased (mouseX, mouseY)
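The variable list above matches the parameters SoundCipher’s SCScore.addNote(startTime, channel, instrument, pitch, dynamic, duration, articulation, pan) takes. Here is one hedged guess at how a mousePressed/mouseReleased interval could map to the dynamic and duration values (plain Java; the names and thresholds are assumptions, not the project’s code):

```java
// Hypothetical mapping from a stroke to SoundCipher note parameters.
public class NoteParams {
    // Dynamic (velocity 0-127): louder toward the top of the canvas (y = 0).
    static float dynamicFromY(float y, float height) {
        return 127 * (1 - y / height);
    }

    // Duration in beats, from how long the stroke was held (ms), clamped
    // so a long drag cannot produce an endless note. Assumes 500 ms/beat.
    static float durationFromHold(long pressedMs, long releasedMs) {
        float beats = (releasedMs - pressedMs) / 500f;
        return Math.min(Math.max(beats, 0.25f), 4f);
    }
}
```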

[record the music]
1. Group the painted MIDI notes.
2. Record the MIDI notes in the group.
3. Hit the save button to save the recorded MIDI notes as a .wav file.
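Steps 1 and 2 can be modeled as a timestamped note group that gets replayed later; the actual .wav export would go through Minim’s AudioRecorder (created with minim.createRecorder(), then beginRecord()/endRecord()/save()), which this plain-Java sketch does not mock. All names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: collect painted notes with timestamps so the whole
// take can be replayed (or handed to a recorder) later.
public class NoteGroup {
    static class Note {
        final long atMs; final int pitch;
        Note(long atMs, int pitch) { this.atMs = atMs; this.pitch = pitch; }
    }

    private final List<Note> notes = new ArrayList<>();

    void record(long atMs, int pitch) { notes.add(new Note(atMs, pitch)); }

    // Replay schedule: the same notes shifted so the first one starts at 0.
    List<Note> replaySchedule() {
        List<Note> out = new ArrayList<>();
        if (notes.isEmpty()) return out;
        long start = notes.get(0).atMs;
        for (Note n : notes) out.add(new Note(n.atMs - start, n.pitch));
        return out;
    }

    int size() { return notes.size(); }
}
```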

– mousePressed (start painting)
  – start the timer
– mouseReleased (stop painting)
  – stop the timer
– Timer
  1. start an interval
  2. on each interval:
     – paint and grab (mouseX, mouseY)
     – translate (mouseX, mouseY) to notes on the scale
     – send the notes on the scale to the audio class
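In Processing’s draw() loop there is no real timer object; the usual pattern is to compare millis() against the last sample time and grab (mouseX, mouseY) once per interval. A sketch of that pattern, modeled with an explicit clock so it runs outside Processing (the class name is made up):

```java
// Illustrative interval timer: shouldSample() returns true once per
// intervalMs, i.e. whenever a (mouseX, mouseY) sample should be taken.
public class IntervalSampler {
    private final int intervalMs;
    private long lastSampleMs = Long.MIN_VALUE;

    IntervalSampler(int intervalMs) { this.intervalMs = intervalMs; }

    // Pass the current time (e.g. Processing's millis()) every frame.
    boolean shouldSample(long nowMs) {
        if (lastSampleMs == Long.MIN_VALUE || nowMs - lastSampleMs >= intervalMs) {
            lastSampleMs = nowMs;
            return true;
        }
        return false;
    }
}
```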

– if (mousePressed)
  – draw randomly scattered points
  – point density increases near (mouseX, mouseY)
  – confine the scattered points to a circular area
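The spray scatter above can be sketched with Gaussian offsets (dense at the center, sparse at the edge) clipped to a circle. Plain Java, with an illustrative class name and a seeded RNG so the output is reproducible:

```java
import java.util.Random;

// Illustrative spray-paint scatter: random points whose density falls off
// with distance from the cursor, clipped to a circular area.
public class SprayScatter {
    // Returns count {x, y} pairs around (cx, cy), all within radius.
    static float[][] points(float cx, float cy, float radius, int count, long seed) {
        Random rng = new Random(seed);
        float[][] pts = new float[count][2];
        for (int i = 0; i < count; i++) {
            float dx, dy;
            do {
                // Std. dev. of radius/3 puts ~99% of draws inside the circle.
                dx = (float) rng.nextGaussian() * radius / 3;
                dy = (float) rng.nextGaussian() * radius / 3;
            } while (dx * dx + dy * dy > radius * radius); // reject outliers
            pts[i][0] = cx + dx;
            pts[i][1] = cy + dy;
        }
        return pts;
    }
}
```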

Group updates
The project’s cool name is ‘Sound Tagger’.
Each person’s role map will be posted by Joe once he remembers to.

Objective: Create a drawing tool that has 3 different drawing settings. These settings change when specific words are typed. The words are represented by the style, stroke, and movement of each individual tool. Each tool can also change colors based on key functions, and each tool might also have a sound output function.
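The word-triggered tool switch could work by appending each typed key to a rolling buffer and checking whether it ends with a trigger word. The trigger words “om” and “paz” come from the pseudocode sketch later in this post; everything else here is an illustrative guess in plain Java:

```java
// Illustrative word-trigger: call onKey() from keyPressed() with each
// typed character; the current tool changes when a trigger word is typed.
public class WordTrigger {
    private final StringBuilder buffer = new StringBuilder();
    private int tool = 1; // current drawing tool (1 = default)

    int onKey(char c) {
        buffer.append(Character.toLowerCase(c));
        // Keep the buffer short; trigger words are only a few letters.
        if (buffer.length() > 16) buffer.delete(0, buffer.length() - 16);
        String s = buffer.toString();
        if (s.endsWith("om")) tool = 2;
        else if (s.endsWith("paz")) tool = 3;
        return tool;
    }
}
```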



1) Research different types of drawing tools, review old code examples and music libraries, and review key functions.

2) Identify what text will represent each tool.

3) Identify other attributes.

4) Pseudocode

5) Research text callouts

6) Code and test

7) Debug


Pseudocode Sketch:


//import processing.pdf.*; // import the Processing PDF library
//float angle = 0; // variable for changing the angle
//int line1; // option for line 1
//int line2; // option for line 2
//int line3; // option for line 3
//int[] aline; // array of lines

//void setup() {
//  size(600, 600); // size of the canvas
//  stroke(0); // line color
//  strokeWeight(random(5)); // thickness of the line
//  background(255); // background color
//  smooth(); // anti-alias the line
//}

//void draw() {
//  // Three drawing options
//  // Line option one: symmetrical black ink tool
//  point(mouseX, mouseY); // point for the line to start from
//  point(width - mouseX, mouseY); // mirrored starting point
//  strokeWeight(5); // line weight for this drawing tool
//  stroke(0, 5); // stroke color

//  // Line option two
//  fill(0); // the color
//  ellipse(mouseX, mouseY, 50, 50); // the brush shape

//  // Line option three
//  colorMode(HSB, 100); // color mode for the line
//  ellipse(mouseX, mouseY, 100, 100); // position and size of the ellipse
//}

//void mouseDragged() {
//  stroke(0, 90);
//  strokeWeight(1);
//  line(pmouseX, pmouseY, mouseX, mouseY); // the drawn line
//  line(width - pmouseX, pmouseY, width - mouseX, mouseY); // mirrored across the sketch width
//}

//void mouseClicked() { // clear the background when the mouse is clicked
//  background(255);
//}

// Key pressed (or words that start with keys):
// If the B key is pressed, change the line color to blue
// If the G key is pressed, change the line color to green
// If the Y key is pressed, change the line color to yellow
// If the R key is pressed, change the line color to red

// Words written to change line quality:
// If "om" is written, change to line option 2
// If "paz" is written, change to line option 3

// Notes:
// I need lines to be drawn only when they are called through the key pressed.
// I think this means I need to make each type of line its own function.
// I would like to add a song that plays in the background.


I will be working with Siri, Salome, and Joe on an ambitious project that is called Sound Tagger at the moment. I will be in charge of the design/coding of the drawing tools (paint, paint drips, color selectors). Here are some sketches of ideas that I had.

A video showing an artist painting while listening to music.
Conceptually similar.
Similar in how you paint, but it records the sound as your finger touches the screen instead of producing one.

Initial prototype
The idea is to create a drawing tool that generates sounds according to the outcome of the painting, in the form of street spray-paint art. As you move the mouse, a specific sound plays based on the location on the canvas, the color picked, and the mouse’s speed. Once finished, hit the play button to hear how the music turns out. This could be considered another form of musical instrument.

This is a group project where I will be working with Joe, Salome and Huy. My part so far involves the pseudocode, interface visuals and code clean-up.

Related ideas
1. Make the paint dance as it is played (rotate, bend, twist, etc.), acting like a music visualizer.
2. Create a website for the project using processing.js, with a function that lets you share the final result via social media. Basically, make it public.
3. Have an option to paint on an actual website. This could be done either by inserting a screen-capture function or by converting the tool into a browser add-on.

Screenshot sketches
1. Starting screen.
2. Empty canvas.
3. Start painting.
4. Finished.
5. Click play to hear the sound.


My inspiration projects are a little weird, but totally applicable:

1. The musical app formerly known as Fruity Loops – – The functionality of the app has informed every aspect of the design decisions I’ve made for this idea.

2. The EyeWriter – – The idea of taking unorthodox approaches to creating art, the use of non-traditional drawing tools, and the general innovative spirit of this project have pushed me to try to realize this idea, because it’s not nearly as daunting as what they accomplished.

3. Music Landscape – – My first stab at this idea, and a clunky HTML/JS attempt at the synesthesia between drawing and music. This is the heart of what I want to do for my final.


This application will create an interface for drawing with spray paint on a series of changeable background “canvases,” generating music based on the paint that is applied. The pixels painted, along with the color and background choice, will define what types of sounds are made, and the motion across the screen will determine the exact configuration of notes played.
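One hedged guess at the motion part of that mapping: compute the mouse’s speed from consecutive positions and let faster strokes produce shorter notes, so fast painting reads as fast passages. The thresholds and names are illustrative assumptions, not the project’s code:

```java
// Illustrative motion-to-notes mapping.
public class MotionMapper {
    // Mouse speed in pixels per frame, from consecutive positions.
    static float speed(float px, float py, float x, float y) {
        float dx = x - px, dy = y - py;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // Faster movement -> shorter notes (durations in beats).
    static float durationFor(float speed) {
        if (speed > 40) return 0.25f;  // fast stroke: sixteenth notes
        if (speed > 10) return 0.5f;   // medium: eighths
        return 1f;                     // slow or still: quarters
    }
}
```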



