Capture MIDI messages in Processing during playback

This second MIDI-in-Processing example uses the Receiver interface to capture all the MIDI messages during the playback of a MIDI file. The program uses a custom GetMidi class that implements the Receiver interface. During playback, it displays each NOTE_ON message together with its channel, octave and note.

The source code of the example is also in the Magicandlove GitHub repository.

Sample Processing screen during midi playback
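The repository holds the full class; below is a minimal sketch of how a Receiver implementation along the lines of GetMidi might look. The note-name table and the octave arithmetic are my own assumptions, and the actual class may differ.

import javax.sound.midi.*;

// A hypothetical sketch of the GetMidi class; the version in the
// repository may differ in its details.
public class GetMidi implements Receiver {
  // Note names within one octave.
  final String [] names = {
    "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"
  };

  public void send(MidiMessage msg, long timeStamp) {
    if (msg instanceof ShortMessage) {
      ShortMessage sm = (ShortMessage) msg;
      // A NOTE_ON with zero velocity is effectively a note-off.
      if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
        int channel = sm.getChannel();
        int octave = sm.getData1() / 12 - 1;  // common MIDI octave numbering
        String note = names[sm.getData1() % 12];
        println("Channel " + channel + ", octave " + octave + ", note " + note);
      }
    }
  }

  public void close() {
  }
}

To receive messages, attach an instance to the sequencer from the playback example below, inside its try/catch block, with player.getTransmitter().setReceiver(new GetMidi()).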

Using MIDI in Processing for playback

This is my first use of MIDI in Processing. I do not use the MidiBus library for Processing; instead, I try the standard javax.sound.midi package in Java. The Java SE 8 documentation also includes the javadoc for this package.

Screenshot of the Processing sketch

The Processing source code and sample MIDI files are in the Magicandlove GitHub repository. The MIDI example files were downloaded from the midiworld website.

The code basically needs a Synthesizer to render the MIDI instruments into audio and a Sequencer to play back the MIDI sequence.

Synthesizer synth = MidiSystem.getSynthesizer();
Sequencer player = MidiSystem.getSequencer();
synth.open();
player.open();

All the MIDI music files are in the data folder of the Processing sketch. To play back a piece of MIDI music, we wrap it in a Java File object and use the following code. The variable f is a File instance pointing to the MIDI file in the data folder.

Sequence music = MidiSystem.getSequence(f);
player.setSequence(music);
player.start();
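Putting the snippets together, the following is a minimal runnable sketch, assuming a MIDI file with the hypothetical name music.mid sits in the data folder.

import javax.sound.midi.*;
import java.io.File;

Synthesizer synth;
Sequencer player;

void setup() {
  size(640, 480);
  background(0);
  try {
    synth = MidiSystem.getSynthesizer();
    player = MidiSystem.getSequencer();
    synth.open();
    player.open();
    // Hypothetical file name; use any MIDI file in the data folder.
    File f = new File(dataPath("music.mid"));
    Sequence music = MidiSystem.getSequence(f);
    player.setSequence(music);
    player.start();
  }
  catch (Exception e) {
    println(e.getMessage());
  }
}

void draw() {
}

void exit() {
  // Close the MIDI devices before leaving the sketch.
  player.close();
  synth.close();
  super.exit();
}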

Saving video from Processing with jCodec 0.2.3

In a former post, I tested jCodec 0.1.5 and 0.2.0 to save the Processing screen into an MP4 file. The latest version, jCodec 0.2.3, has however changed its API for AWT-based applications. Here is the new code for Processing to use jCodec 0.2.3 to save any BufferedImage to an external MP4 file.

To use the code, you need to download the following two jar files from the jCodec website and put them into the code folder of your Processing sketch.

  • jcodec-0.2.3.jar
  • jcodec-javase-0.2.3.jar

The following code writes a frame of your Processing screen into the MP4 file on every mouse press.

import processing.video.*;
import java.awt.image.BufferedImage;
import org.jcodec.api.awt.AWTSequenceEncoder;
 
Capture cap;
AWTSequenceEncoder enc;
 
public void settings() {
  size(640, 480);
}
 
public void setup() {
  cap = new Capture(this, width, height);
  cap.start();
  String fName = "recording.mp4";
  enc = null;
  try {
    enc = AWTSequenceEncoder.createSequenceEncoder(new File(dataPath(fName)), 25);
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
}
 
public void draw() {
  image(cap, 0, 0);
}
 
public void captureEvent(Capture c) {
  c.read();
}
 
private void saveVideo(BufferedImage i) {
  try {
    enc.encodeImage(i);
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
}
 
public void mousePressed() {
  saveVideo((BufferedImage) this.getGraphics().getImage());
}
 
public void exit() {
  try {
    enc.finish();
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
  super.exit();
}

To save only the webcam capture image instead, replace the saveVideo call in mousePressed() with the following.

saveVideo((BufferedImage) cap.getNative());

Charts in Processing

Here is my first test of using charts from JavaFX in Processing. Recent versions of Processing let us use the FX2D renderer. The following is a simple pie chart example.


 

import javafx.scene.canvas.Canvas;
import javafx.scene.Scene;
//import javafx.stage.Stage;
import javafx.scene.layout.StackPane;
import javafx.collections.ObservableList;
import javafx.collections.FXCollections;
import javafx.scene.chart.*;
import javafx.geometry.Side;
 
void setup() {
  size(640, 480, FX2D);
  background(255);
  noLoop();
}
 
void draw() {
  pieChart();
}
 
void pieChart() {
  Canvas canvas = (Canvas) this.getSurface().getNative();
  Scene scene = canvas.getScene();
  //  Stage st = (Stage) scene.getWindow();
  StackPane pane = (StackPane) scene.getRoot();
 
  ObservableList<PieChart.Data> pieChartData =
    FXCollections.observableArrayList(
    new PieChart.Data("Fat Bear", 10), 
    new PieChart.Data("Pooh San", 20), 
    new PieChart.Data("Pig", 8), 
    new PieChart.Data("Rabbit", 15), 
    new PieChart.Data("Chicken", 2));
  PieChart chart = new PieChart(pieChartData);
  chart.setTitle("Animals");
  chart.setLegendSide(Side.RIGHT);
 
  pane.getChildren().add(chart);
}

Java and time with Processing

Instead of using the Processing millis() function or the Java Timer class, we can make use of the Instant and Duration classes introduced in Java 8. Here is a simple example for demonstration.

The program uses two Instant variables, start and end, and computes the elapsed time between them with the Duration.between() method.

import java.time.Instant;
import java.time.Duration;
 
Instant start;
PFont font;
 
void setup() {
  size(640, 480);
  start = Instant.now();
  frameRate(30);
  font = loadFont("AmericanTypewriter-24.vlw");
  textFont(font, 24);
}
 
void draw() {
  background(0);
  Instant end = Instant.now();
  Duration elapsed = Duration.between(start, end);
  long ns = elapsed.toNanos();
  text(Long.toString(ns), 230, 230);
}
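The nanosecond figure grows quickly; Duration also offers coarser units, for example:

long ms = elapsed.toMillis();    // elapsed time in milliseconds
long s  = elapsed.getSeconds();  // whole seconds of the duration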

 

Screen capture in Processing

This sketch demonstrates the use of the Java Robot class to perform screen capture in Processing. It creates a Jodi-like feedback effect on the computer screen. Have fun with it.

Here is the code.

 
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.awt.Rectangle;
 
Robot robot;
 
void setup() {
  size(640, 480);
  try {
    robot = new Robot();
  } 
  catch (Exception e) {
    println(e.getMessage());
  }
}
 
void draw() {
  background(0);
  Rectangle r = new Rectangle(mouseX, mouseY, width, height);
  BufferedImage img1 = robot.createScreenCapture(r);
  PImage img2 = new PImage(img1);
  image(img2, 0, 0);
}

Save video in Processing with JCodec

As a side product of my current research, I managed to save the Processing screen to an MP4 video file with the use of the JCodec library. Download jcodec-0.1.5.jar into the code folder of your Processing sketch. The simplest way is to use the SequenceEncoder class to add each BufferedImage frame to the MP4 video. Remember to finish() the video file before the sketch ends.

The following example captures the live video stream from a webcam and outputs it to an external MP4 file in the data folder. Click the mouse to toggle recording, and press the SPACE key to finish the video file.

Here is the source code.

import processing.video.*;
import org.jcodec.api.SequenceEncoder;
import java.awt.image.BufferedImage;
import java.io.File;
 
Capture cap;
SequenceEncoder enc;
String videoName;
boolean recording;
 
void setup() {
  size(640, 480);
  background(0);
  cap = new Capture(this, width, height);
  videoName = "bear.mp4";
  recording = false;
  frameRate(25);
  smooth();
  noStroke();
  fill(255);
  cap.start();
  try {
    enc = new SequenceEncoder(new File(dataPath(videoName)));
  } 
  catch (IOException e) {
    e.printStackTrace();
  }
}
 
void draw() {
  image(cap, 0, 0);
  String fStr = nf(round(frameRate));
  text(fStr, 10, 20);
  if (recording) {
    BufferedImage bi = (BufferedImage) this.getGraphics().getImage();
    try {
      enc.encodeImage(bi);
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  recording = !recording;
  println("Recording : " + recording);
}
 
void keyPressed() {
  if (keyCode == 32) {
    try {
      enc.finish();
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}

The program also uses the undocumented methods getGraphics() and getImage() to obtain the raw image of the Processing sketch window.

Artificial Neural Network in OpenCV with Processing

This is my first trial of the machine learning module, the artificial neural network, in OpenCV with Processing. I used the same OpenCV 3.1.0 Java build files. The program took the live video stream (PImage) from the webcam and down-sampled it to a grid of just 8 x 6 greyscale pixels. It started by default in training mode, where I could click on the left hand side of the screen to record an image of myself without a hat, and on the right hand side to record an image of myself wearing a hat. Pressing the SPACE key switched to predict mode, where clicking the video sent the current image to the neural network to tell whether I was wearing a hat or not. I used around 20 images for the positive response and 20 images for the negative response.

Here is the source code.
 
The main program

import processing.video.*;
import org.opencv.core.Core;
 
Capture cap;
boolean training;
ANN ann;
int w, h;
 
void setup() {
  size(640, 480);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
  cap = new Capture(this, width, height);
  cap.start();
  background(0);
  training = true;
  w = 8;
  h = 6;
  ann = new ANN(w*h);
}
 
void draw() {
  image(cap, 0, 0);
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  PImage img = new PImage(w, h, ARGB);
  img.copy(cap, 0, 0, width, height, 0, 0, w, h);
  img.updatePixels();
  img.filter(GRAY);
  String fName = "";
  float [] grey = getGrey(img);
  if (training) {
    // The left half of the screen labels the sample 0 (no hat);
    // the right half labels it 1 (wearing a hat).
    float label = 0.0;
    if (mouseX < width/2) {
      label = 0.0;
    } else {
      label = 1.0;
    }
    ann.addData(grey, label);
    fName = (label == 0.0) ? "Negative" : "Positive";
    fName += nf(ann.getCount(), 4) + ".png";
    img.save(dataPath("") + "/" + fName);
  } else {
    float val = ann.predict(grey);
    float [] res = ann.getResult();
    val = res[0];
    float diff0 = abs(val);
    float diff1 = abs(val - 1);
    if (diff0 < diff1) {
      println("Without hat");
    } else {
      println("With hat");
    }
  }
}
 
float [] getGrey(PImage m) {
  float [] g = new float[w*h];
  if (m.width != w || m.height != h) 
    return g;
  for (int i=0; i<m.pixels.length; i++) {
    color c = m.pixels[i];
    g[i] = red(c) / 256.0;
  }
  return g;
}
 
void keyPressed() {
  if (keyCode == 32) {
    training = !training;
    if (!training) 
      ann.train();
  }
  println("Training status is " + training);
}

The Artificial Neural Network class

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfFloat;
import org.opencv.ml.ANN_MLP;
import org.opencv.ml.Ml;
 
public class ANN {
  final int MAX_DATA = 1000;
  ANN_MLP mlp;
  int input;
  int output;
  ArrayList<float []>train;
  ArrayList<Float>label;
  MatOfFloat result;
  String model;
 
  public ANN(int i) {
    input = i;
    output = 1;
    mlp = ANN_MLP.create();
    // Network layers: input, one hidden layer of half the input size, one output neuron.
    MatOfInt m1 = new MatOfInt(input, input/2, output);
    mlp.setLayerSizes(m1);
    mlp.setActivationFunction(ANN_MLP.SIGMOID_SYM);
    mlp.setTrainMethod(ANN_MLP.RPROP);
    result = new MatOfFloat();
    train = new ArrayList<float[]>();
    label = new ArrayList<Float>();
    model = dataPath("trainModel.xml");
  }
 
  void addData(float [] t, float l) {
    if (t.length != input) 
      return;
    if (train.size() >= MAX_DATA) 
      return;
    train.add(t);
    label.add(l);
  }
 
  int getCount() {
    return train.size();
  }
 
  void train() {
    float [][] tr = new float[train.size()][input];
    for (int i=0; i<train.size(); i++) {
      for (int j=0; j<train.get(i).length; j++) {
        tr[i][j] = train.get(i)[j];
      }
    }
    MatOfFloat response = new MatOfFloat();
    response.fromList(label);
    float [] trf = flatten(tr);
    // Each training sample occupies one row of the matrix.
    Mat trainData = new Mat(train.size(), input, CvType.CV_32FC1);
    trainData.put(0, 0, trf);
    mlp.train(trainData, Ml.ROW_SAMPLE, response);
    trainData.release();
    response.release();
    train.clear();
    label.clear();
  }
 
  float predict(float [] i) {
    if (i.length != input) 
      return -1;
    Mat test = new Mat(1, input, CvType.CV_32FC1);
    test.put(0, 0, i);
    float val = mlp.predict(test, result, 0);
    return val;
  }
 
  float [] getResult() {
    float [] r = result.toArray();
    return r;
  }
 
  float [] flatten(float [][] a) {
    if (a.length == 0) 
      return new float[]{};
    int rCnt = a.length;
    int cCnt = a[0].length;
    float [] res = new float[rCnt*cCnt];
    int idx = 0;
    for (int r=0; r<rCnt; r++) {
      for (int c=0; c<cCnt; c++) {
        res[idx] = a[r][c];
        idx++;
      }
    }
    return res;
  }
}

Enumerate all files in the data folder of Processing

There are many ways to enumerate all the files inside the data folder of a Processing sketch. Here are two of them. The first uses the Java DirectoryStream class; the second uses the static method walkFileTree() of the Files class. Both snippets need the java.nio.file imports shown at the top of each example.
 
Example with DirectoryStream

import java.nio.file.*;

try {
  DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(dataPath("")));
  for (Path file : stream) {
    println(file.getFileName());
  }
}
catch (IOException e) {
  e.printStackTrace();
}

Example with Files.walkFileTree

import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

try {
  Files.walkFileTree(Paths.get(dataPath("")), new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
      println(file.getFileName());
      return FileVisitResult.CONTINUE;
    }
  });
}
catch (IOException e) {
  e.printStackTrace();
}