AI Information – 2019
Here is a list of popular Chinese AI and CV related companies.
OpenCV 4.0.0 Java Build and CVImage library
OpenCV 4.0.0 is now available on the official OpenCV.org website. I have compiled and packaged the CVImage library from my book together with the Java build of the new OpenCV library.
You can download the CVImage library here.
First try of P5 and OpenCV JS in Electron
This is my first try of p5.js together with the official release of OpenCV JavaScript. I decided not to use any browsers and experimented with the integration in the Electron environment with Node.js. The first experiment is a simple image processing application using the Canny edge detector. The IDE I chose to work with is the free Visual Studio Code, which is also available on multiple OS platforms. I have tested it both in Windows 10 and Mac OSX Mojave. In Mac OSX, I first installed Node.js with Homebrew.
brew update
brew install node
Then I installed Electron as a global package with npm.
npm install -g electron
For Visual Studio Code, I also included the JavaScript support and the ESLint plugin. The next step is to download the p5.js and p5.dom.js code from the p5.js website to your local folder. I put them into a libs folder outside of my application folders. For OpenCV, the project includes the pre-built opencv.js from its documentation repository. The version I used here is 3.4.3. The only documentation I can find for OpenCV JS is this tutorial.
Each Node.js application can be initialised with the following command run in its folder. Alternatively, you can do it from the Terminal window within Visual Studio Code. Fill in the details when prompted.
npm init
In Visual Studio Code, you have to add a configuration that uses the electron command, rather than the default node command, to run the main program, main.js. Adding the configuration generates a launch.json file like the following:
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Electron Main",
      "runtimeExecutable": "electron",
      "program": "${workspaceFolder}/main.js",
      "protocol": "inspector"
    }
  ]
}
For the programming part, I used a main.js to define the Electron window and its related functions. The window loads the index.html page, the main webpage for the application, which in turn calls sketch.js to perform the p5.js and OpenCV core functions. p5.js and OpenCV communicate through the canvas object, using the GUI functions imread() and imshow(). This example switches on the default webcam to capture live video and performs a blur and Canny edge detection.
Source code is now available at my GitHub repository.
Intel Realsense depth image in Processing (Windows only)
This second test is also based on the Java wrapper of the Intel Realsense SDK by Edwin Jakobs. This version displays the 16 bit depth image from Processing.
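The wrapper call that grabs the frame is omitted here, but the display step itself is only a mapping from 16 bit depth values to 8 bit grey levels. Below is a minimal Processing sketch of that conversion; the frame size, the scaling range and the getDepthFrame() placeholder are assumptions of mine and not part of the wrapper API.

// Map a 16 bit depth frame (assumed already obtained from the Realsense Java
// wrapper as a short[]) into a grayscale PImage for display.
final int DEPTH_W = 640;
final int DEPTH_H = 480;
final int MAX_DEPTH = 65535;   // raw 16 bit range; adjust to the camera's working range

PImage depthImg;

void setup() {
  size(640, 480);
  depthImg = createImage(DEPTH_W, DEPTH_H, RGB);
}

void draw() {
  short[] depthFrame = getDepthFrame();   // hypothetical: frame from the wrapper
  depthToImage(depthFrame, depthImg);
  image(depthImg, 0, 0);
}

// Convert the 16 bit depth values into 8 bit grayscale pixels.
void depthToImage(short[] depth, PImage img) {
  img.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    int d = depth[i] & 0xFFFF;                  // treat the short as unsigned
    int v = (int) map(d, 0, MAX_DEPTH, 0, 255); // scale to 0-255
    img.pixels[i] = color(v);
  }
  img.updatePixels();
}

// Placeholder for the actual Realsense call from the Java wrapper.
short[] getDepthFrame() {
  return new short[DEPTH_W * DEPTH_H];
}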
The source code is again available from the GitHub repository of this post.
Intel Realsense colour image in Processing (Windows only)
This test is based on the Java wrapper of the Intel Realsense SDK, version 2, found in the following GitHub repository.
https://github.com/edwinRNDR/librealsense/tree/master/wrappers/java.
It only provides the pre-built binary for Windows. I used it to test with my Intel Realsense D415 camera. The image below is a screenshot of the camera view.
The source code can be found in the GitHub repository of this post.
Neural network style transfer in OpenCV with Processing
This example is a Processing implementation of the OpenCV sample, fast_neural_style.py, to perform live style transfer using existing pre-trained Torch models.
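The core dnn calls are only a few lines. The following is a rough single-image sketch of the idea, not the actual implementation in the repository: the model file name and input image are placeholder assumptions, the mean values follow the fast_neural_style.py sample, the per-pixel copy is written for clarity rather than speed, and the OpenCV native library is assumed to be loaded already (for example by the CVImage library).

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import org.opencv.imgcodecs.Imgcodecs;

PImage result;

void setup() {
  size(640, 480);
  // Load a pre-trained Torch style transfer model (file name is an assumption).
  Net net = Dnn.readNetFromTorch(dataPath("starry_night.t7"));
  Mat im = Imgcodecs.imread(dataPath("input.jpg"));   // assumed input image
  // Mean values as used in the fast_neural_style.py sample.
  Mat blob = Dnn.blobFromImage(im, 1.0, new Size(im.cols(), im.rows()),
    new Scalar(103.939, 116.779, 123.68), false, false);
  net.setInput(blob);
  Mat out = net.forward();               // shape 1 x 3 x rows x cols
  result = blobToImage(out, im.cols(), im.rows());
}

void draw() {
  image(result, 0, 0);
}

// Copy the network output back into a PImage, adding the mean values again.
// Per-element Mat.get() is slow but keeps the example short.
PImage blobToImage(Mat out, int w, int h) {
  float[] mean = {103.939f, 116.779f, 123.68f};
  PImage img = createImage(w, h, RGB);
  img.loadPixels();
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      float b = (float) out.get(new int[]{0, 0, y, x})[0] + mean[0];
      float g = (float) out.get(new int[]{0, 1, y, x})[0] + mean[1];
      float r = (float) out.get(new int[]{0, 2, y, x})[0] + mean[2];
      img.pixels[y * w + x] = color(constrain(r, 0, 255),
        constrain(g, 0, 255), constrain(b, 0, 255));
    }
  }
  img.updatePixels();
  return img;
}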
The complete source code is in my GitHub repository of this website at ml20180827a.
Face landmark detailed information
Referring back to the post on face landmark detection, the command to retrieve face landmark information is
fm.fit(im.getBGR(), faces, shapes);
where im.getBGR() is the Mat variable of the input image; faces is the MatOfRect variable (a number of Rect) obtained from the face detection; shapes is the ArrayList<MatOfPoint2f> variable returning the face landmark details for each face detected.
Each face is a MatOfPoint2f value that we can convert to an array of Point. The array has length 68, and each point in the array corresponds to a face landmark feature point in the face, as shown in the image below.
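To illustrate the conversion, the fragment below turns each MatOfPoint2f into a Point array and draws the 68 points on the Processing canvas. The function name, stroke colour and point size are my own choices.

import java.util.ArrayList;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

// Draw the 68 landmark points of each detected face on the canvas.
// Assumes fm.fit() has already filled the shapes list as shown above.
void drawLandmarks(ArrayList<MatOfPoint2f> shapes) {
  noFill();
  stroke(255, 0, 0);
  for (MatOfPoint2f shape : shapes) {
    Point[] pts = shape.toArray();   // one MatOfPoint2f becomes 68 Point values
    for (Point p : pts) {
      ellipse((float) p.x, (float) p.y, 4, 4);
    }
  }
}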
Face swap example in OpenCV with Processing (v.2)
To enhance the last post on face swap, we can make use of the cloning features of the Photo module in OpenCV. The command we use is the seamlessClone() function.
Photo.seamlessClone(warp, im2, mask, centre, output, Photo.NORMAL_CLONE);
where warp is the accumulation of all warped triangles; im2 is the original target image; mask is the masked image of the convex hull of the face contour; centre is a Point variable of the centre of the target image; output will be the blended final image.
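As a rough sketch of how these pieces can fit together (a simplified version of mine, not necessarily the code in the repository), assuming a hull variable of type MatOfPoint that holds the convex hull points of the face contour in the target image:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.photo.Photo;

// Build the mask from the convex hull of the face contour and blend the
// warped face into the target image with seamlessClone().
Mat blendFace(Mat warp, Mat im2, MatOfPoint hull) {
  // Fill the convex hull in white on a black image to form the mask.
  Mat mask = Mat.zeros(im2.size(), CvType.CV_8UC3);
  Imgproc.fillConvexPoly(mask, hull, new Scalar(255, 255, 255));

  // Centre of the target image, as described above.
  Point centre = new Point(im2.cols() / 2, im2.rows() / 2);

  Mat output = new Mat();
  Photo.seamlessClone(warp, im2, mask, centre, output, Photo.NORMAL_CLONE);
  return output;
}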
Complete source code is now in the GitHub repository, ml20180820b.