Face landmark convex hull detection in OpenCV with Processing

The 3rd exercise is a demonstration of obtaining the convex hull of the face landmark points with the OpenCV Face module. The program builds on the face landmark information collected in the last post to find the convex hull of the detected face.

The function is provided by the Imgproc (image processing) module of OpenCV. In the sample program, the following command obtains the information of each point on the convex hull of the polygon.

Imgproc.convexHull(new MatOfPoint(p), index, false);

The first parameter, p, is an array of the OpenCV type Point. The second parameter, index, is the returned value of type MatOfInt, indicating all the points along the convex hull boundary; each integer value is an index into the original array p. The third parameter, false, specifies that the hull points are not returned in clockwise orientation. By traversing the array index, we can obtain all the points along the convex hull.
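
As an illustration, here is a minimal sketch of the traversal, assuming p already holds the landmark points from the last post; the helper name hullOf is my own, not from the original program.

import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Return the convex hull points of the landmark array p.
// hullOf is a hypothetical helper name for illustration only.
Point [] hullOf(Point [] p) {
  MatOfInt index = new MatOfInt();
  Imgproc.convexHull(new MatOfPoint(p), index, false);
  int [] idx = index.toArray();
  Point [] hull = new Point[idx.length];
  for (int i=0; i<idx.length; i++) {
    hull[i] = p[idx[i]]; // each entry of index points back into p
  }
  index.release();
  return hull;
}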

The complete source code is now in my GitHub repository ml20180819a.

Face landmark detection in OpenCV Face module with Processing

The 2nd exercise is a demonstration using the Face module of the OpenCV contribution libraries. The official documentation for OpenCV 3.4.2 has a tutorial on face landmark detection. The Face module distribution also has a sample, Facemark.java, from which this exercise is derived. There are 2 extra parameter files. One is the Haar Cascades file, haarcascade_frontalface_default.xml, which we used in the last post for general face detection. The other one is the face landmark model file, face_landmark_model.dat, which is downloaded during the build process of OpenCV. Otherwise, it is also available at this GitHub link.

The program uses the Facemark class with the instance variable fm.

Facemark fm;

It is created with the command,

fm = Face.createFacemarkKazemi();

The model file is then loaded with the following,

fm.loadModel(datPath(modelFile));

where modelFile is the string variable containing the model file name.
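
Once the model is loaded, a minimal sketch of the detection and fitting steps could look like the following; the variable names faces and shapes are my own, and faceFile is assumed to hold the Haar Cascades file name used for the face detection step.

import java.util.ArrayList;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.face.Face;

// Detect the faces first, then fit the landmark model to them.
// faceFile is an assumed variable holding the Haar Cascades file name.
MatOfRect faces = new MatOfRect();
Face.getFacesHAAR(im.getBGR(), faces, dataPath(faceFile));
ArrayList<MatOfPoint2f> shapes = new ArrayList<MatOfPoint2f>();
// fit() returns true when landmarks are found for the detected faces.
if (fm.fit(im.getBGR(), faces, shapes)) {
  for (MatOfPoint2f s : shapes) {
    for (Point pt : s.toArray()) {
      ellipse((float) pt.x, (float) pt.y, 3, 3); // draw each landmark
    }
  }
}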

Complete source code is in this GitHub repository.


Face detection with the OpenCV Face module in Processing

This is the first of a series of tutorials elaborating on the OpenCV face swap example. The 1st one is a demonstration of face detection with the Face module, instead of the Object Detection module. The sample program detects faces from 2 photos, using the Haar Cascades file, haarcascade_frontalface_default.xml, located in the data folder of the Processing sketch.

The major command is

Face.getFacesHAAR(im.getBGR(), faces, dataPath(faceFile));

where im.getBGR() is the photo Mat returned from the CVImage object, im; faces is a MatOfRect variable returning the rectangles of all the faces detected; and faceFile is a string variable containing the file name of the Haar Cascades XML file.
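
To visualize the result, a minimal sketch could loop through the detected rectangles as follows; the drawing code here is my own illustration, not from the original program.

import org.opencv.core.Rect;

// Draw a box around each face detected in the photo.
Rect [] facesArr = faces.toArray();
noFill();
stroke(255, 0, 0);
for (Rect r : facesArr) {
  rect(r.x, r.y, r.width, r.height);
}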

Complete source code is in the website GitHub repository, ml20180818a.


Darknet YOLO v3 testing in Processing with the OpenCV DNN module

This is the third demo of the OpenCV Deep Neural Network (dnn) module in Processing with my latest CVImage library. In this version, I used the Darknet YOLO v3 pre-trained model for object detection. It is based on the object_detection sample from the latest OpenCV distribution. The configuration and weights files for the COCO dataset are also available on the Darknet website. In the data folder of the Processing sketch, you will have the following 3 files:

  • yolov3.cfg (configuration file)
  • yolov3.weights (pre-trained model weight file)
  • object_detection_classes_yolov3.txt (label description file)

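With the 3 files in place, a minimal sketch of loading and running the network could look like this; the 416 x 416 input size follows the YOLO v3 configuration, and the call to getUnconnectedOutLayersNames() assumes the OpenCV 3.4.2 Java binding exposes it, as the object_detection sample does.

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromDarknet(dataPath("yolov3.cfg"), dataPath("yolov3.weights"));
// YOLO v3 expects a square input, scaled to the range 0 to 1.
Mat blob = Dnn.blobFromImage(im.getBGR(), 1/255.0, new Size(416, 416), new Scalar(0, 0, 0), false, false);
net.setInput(blob);
List<Mat> outs = new ArrayList<Mat>();
// Each output row holds the box geometry followed by the class scores.
net.forward(outs, net.getUnconnectedOutLayersNames());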

You can download the source code from my GitHub repositories.

OpenPose in Processing and OpenCV (DNN)

This is the 2nd test of the OpenCV dnn module in Processing through my CVImage library. It uses the OpenPose pre-trained Caffe model.

Since the OpenCV dnn module can read a Caffe model through the readNetFromCaffe() function, the demo sends the real-time webcam image to the model for human pose detection. It makes use of the configuration file openpose_pose_coco.prototxt and the saved model pose_iter_440000.caffemodel. The original reference for the demo is the official OpenCV sample openpose.cpp and the Java implementation from the GitHub of berak. You can download the model details below.
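
A minimal sketch of the loading and forward pass could look like the following; the 368 x 368 input size is an assumption taken from the openpose.cpp sample, not from the original post.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromCaffe(dataPath("openpose_pose_coco.prototxt"), dataPath("pose_iter_440000.caffemodel"));
// Scale the webcam frame to the range 0 to 1 before the forward pass.
Mat blob = Dnn.blobFromImage(im.getBGR(), 1/255.0, new Size(368, 368), new Scalar(0, 0, 0), false, false);
net.setInput(blob);
Mat out = net.forward(); // one heatmap per body part; take the maximum of each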

The description of the OpenPose output can be found on their official GitHub site. The figure below shows the posture information I used in my demo.

Again, the source code is maintained in the Magicandlove repositories of my GitHub. You can download it from here.

Deep Neural Network (dnn) module with Processing

This is my first demo run of the dnn (deep neural network) module in OpenCV 3.4.2 with Processing, using my CVImage library. The module can load pre-trained models from Caffe, Tensorflow, Darknet, and Torch. In this example, I used the Tensorflow model Inception v2 SSD COCO from here. I also obtained the label map file from the Tensorflow GitHub. The following 3 files are in the data folder of the Processing sketch:

  • frozen_inference_graph.pb
  • ssd_inception_v2_coco_2017_11_17.pbtxt
  • mscoco_label_map.pbtxt
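
With these files in place, a minimal sketch of loading and running the model could look like this; the 300 x 300 input size and the swapped R and B channels are assumptions based on the usual SSD Tensorflow settings, not from the original post.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromTensorflow(dataPath("frozen_inference_graph.pb"), dataPath("ssd_inception_v2_coco_2017_11_17.pbtxt"));
Mat blob = Dnn.blobFromImage(im.getBGR(), 1.0, new Size(300, 300), new Scalar(0, 0, 0), true, false);
net.setInput(blob);
// Each detection row holds: id, class id, score, x1, y1, x2, y2.
Mat out = net.forward();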

The source code is in the GitHub repository of this website, here.

OpenCV 3.4.2 Java Build

After the release of OpenCV 3.4.2, I have prepared pre-built versions of the Java libraries for the OSX, Ubuntu, and Windows 8.1 platforms (64-bit). This release has more extensive support for the Java binding. I also packaged the library as the Processing library, CVImage. Please refer to my latest book for details. In addition to the optflow contributed library, it also includes additional contributed libraries, such as bgsegm, face, and tracking.

CVImage for OpenCV 3.4.2


CVImage and PixelFlow in Processing

This is a quick demonstration of using the CVImage library, from the book Pro Processing for Images and Computer Vision with OpenCV, together with the PixelFlow library from Thomas Diewald.
 
Here is the video documentation.

The full source code is below with one additional class.

Main Processing sketch

import cvimage.*;
import processing.video.*;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.fluid.DwFluid2D;
import org.opencv.core.*;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;
 
// Face detection size
final int W = 320, H = 180;
Capture cap;
CVImage img;
CascadeClassifier face;
float ratio;
DwFluid2D fluid;
PGraphics2D pg_fluid;
MyFluidData fluidFunc;
 
void settings() {
  size(1280, 720, P2D);
}
 
void setup() {
  background(0);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
  cap = new Capture(this, width, height);
  cap.start();
  img = new CVImage(W, H);
  face = new CascadeClassifier(dataPath("haarcascade_frontalface_default.xml"));
  ratio = float(width)/W;
 
  DwPixelFlow context = new DwPixelFlow(this);
  context.print();
  context.printGL();
  fluid = new DwFluid2D(context, width, height, 1);
  fluid.param.dissipation_velocity = 0.60f;
  fluid.param.dissipation_density = 0.99f;
  fluid.param.dissipation_temperature = 1.0f;
  fluid.param.vorticity = 0.001f;
 
  fluidFunc = new MyFluidData();
  fluid.addCallback_FluiData(fluidFunc);
  pg_fluid = (PGraphics2D) createGraphics(width, height, P2D);
  pg_fluid.smooth(4);
}
 
void draw() {
  if (!cap.available()) 
    return;
  background(0);
  cap.read();
  cap.updatePixels();
 
  // Downsample the webcam frame into the smaller CVImage for detection.
  img.copy(cap, 0, 0, cap.width, cap.height, 
    0, 0, img.width, img.height);
  img.copyTo();
 
  // Detect faces in the greyscale version of the frame.
  Mat grey = img.getGrey();
  MatOfRect faces = new MatOfRect();
 
  // Face sizes between 60 x 60 and 200 x 200 pixels are accepted.
  face.detectMultiScale(grey, faces, 1.15, 3, 
    Objdetect.CASCADE_SCALE_IMAGE, 
    new Size(60, 60), new Size(200, 200));
  Rect [] facesArr = faces.toArray();
  if (facesArr.length > 0) {
    fluidFunc.findFace(true);
  } else {
    fluidFunc.findFace(false);
  }
  // Use the centre of each detected face to drive the fluid emitter.
  for (Rect r : facesArr) {
    float cx = r.x + r.width/2.0;
    float cy = r.y + r.height/2.0;
    fluidFunc.setPos(new PVector(cx*ratio, cy*ratio));
  }
  fluid.update();
  pg_fluid.beginDraw();
  pg_fluid.background(0);
  pg_fluid.image(cap, 0, 0);
  pg_fluid.endDraw();
  fluid.renderFluidTextures(pg_fluid, 0);
  image(pg_fluid, 0, 0);
  pushStyle();
  noStroke();
  fill(0);
  text(nf(round(frameRate), 2, 0), 10, 20);
  popStyle();
  grey.release();
  faces.release();
}

The class definition of MyFluidData

private class MyFluidData implements DwFluid2D.FluidData {
  float intensity;
  float radius;
  float temperature;
  color c;
  boolean first;
  boolean face;
  PVector pos;
  PVector last;
 
  public MyFluidData() {
    super();
    intensity = 1.0f;
    radius = 25.0f;
    temperature = 5.0f;
    c = color(255, 255, 255);
    first = true;
    pos = new PVector(0, 0);
    last = new PVector(0, 0);
    face = false;
  }
 
  public void findFace(boolean f) {
    face = f;
  }
 
  public void setPos(PVector p) {
    if (first) {
      pos.x = p.x;
      pos.y = p.y;
      last.x = pos.x;
      last.y = pos.y;
      first = false;
    } else {
      last.x = pos.x;
      last.y = pos.y;
      pos.x = p.x;
      pos.y = p.y;
    }
  }
 
  @Override
    public void update(DwFluid2D f) {
 
    if (face) {
      // Flip y, as the fluid simulation origin is at the bottom-left.
      float px = pos.x;
      float py = height - pos.y;
      // Derive the emitter velocity from the face movement between frames.
      float vx = (pos.x - last.x) * 10.0f;
      float vy = (pos.y - last.y) * -10.0f;
      c = color(random(100, 255), random(100, 255), random(50, 100));
      f.addVelocity(px, py, radius, vx, vy);
      f.addDensity (px, py, radius, 
        red(c)/255, green(c)/255, blue(c)/255, 
        intensity);
      f.addTemperature(px, py, radius, temperature);
    }
  }
}