Chen Wei-Ching, Final


I first became interested in this pixel-and-sound translation concept while learning to use the Processing environment. The professor showed us an example of how sound vibrations can affect printed words.

We have all seen examples of sound being used to generate an image; in the hospital, for instance, heartbeats are translated into EKG traces. Then the idea dawned on me: if sound can turn into an image, why not vice versa?

After brainstorming with my professors, Austin and Dhairya, the concept finally came full circle: I noticed that every single barcode could define a note, and my job was to figure out how.


The piece, “Stethoscope,” is built from Processing code, a USB webcam, a stethoscope, and earphones.


/* OpenProcessing Tweak of *@**@* */
/* !do not delete the line above, required for linking your tweak if you upload again */
import ddf.minim.*;
import ddf.minim.signals.*;

import processing.video.*;
import gab.opencv.*;
import java.awt.*;

Minim minim;
AudioOutput out;  // audio output object
SineWave sine;    // sine wave whose pitch and volume follow the image

Capture video;
OpenCV myopencv;

int freq = 440;       // default frequency
float amp = 0.25;     // default amplitude
int samples = 44100;  // sample rate

int location = 0;     // current pixel index for the scanning rectangle
int fullSize;         // total number of pixels in a frame

void setup() {
  size(320, 240);

  minim = new Minim(this);
  out = minim.getLineOut(Minim.STEREO, 512);  // make an output object, set the buffer to 512 samples
  sine = new SineWave(freq, amp, samples);    // start the sine wave with this default tone
  out.addSignal(sine);                        // add the wave to the output object so we can hear it

  String[] cameras = Capture.list();
  if (cameras == null || cameras.length == 0) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    video = new Capture(this, 640/2, 480/2);
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    // The camera can be initialized directly using an element
    // from the array returned by list():
    video = new Capture(this, cameras[0]);
  }
  video.start();

  myopencv = new OpenCV(this, 640/2, 480/2);
  fullSize = width * height;
}

void draw() {
  image(video, 0, 0);

  // Step a small rectangle across the frame, one pixel position per draw() call
  if (location == fullSize) {
    location = 0;
  } else {
    int row = location / width;
    int pos = location - (row * width);
    rect(pos, row, 50, 50);
    location++;
  }

  // Sample the center pixel and map its color onto pitch and volume
  color myColor = get(width/2, height/2);
  float freq = map(floor(brightness(myColor)), 0, 255, 200, 900);
  float amp = map(floor(saturation(myColor)), 0, 255, 0.2, 2.0);
  sine.setFreq(freq);
  sine.setAmp(amp);
}

void captureEvent(Capture c) {
  c.read();
}
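The heart of the sketch is the pitch mapping: a pixel's brightness (0–255) is rescaled linearly onto a frequency range (200–900 Hz), which is exactly what Processing's map() does. A minimal plain-Java sketch of that rescaling (the class and method names here are mine, not part of the original code):

```java
public class PixelToPitch {
    // Linear interpolation, equivalent to Processing's map():
    // rescale value from [inLo, inHi] onto [outLo, outHi].
    static float linmap(float value, float inLo, float inHi,
                        float outLo, float outHi) {
        return outLo + (value - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        // Brightness 0..255 becomes a frequency between 200 and 900 Hz,
        // as in the sketch's call to map().
        System.out.println(linmap(0,   0, 255, 200, 900)); // darkest pixel -> 200.0 Hz
        System.out.println(linmap(255, 0, 255, 200, 900)); // brightest pixel -> 900.0 Hz
        System.out.println(linmap(128, 0, 255, 200, 900)); // mid-gray -> roughly 551 Hz
    }
}
```

The same formula, with a 0.2–2.0 output range, gives the saturation-to-amplitude mapping.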

Download the Processing code from here



