1. Overview
Face recognition is essentially a similarity problem: faces of the same person are mapped into the same embedding space, where they lie close to each other. The distance can be measured with cosine distance, Euclidean distance, or any other suitable metric. Consider the three portraits below.

[Three face images: A, B, and C]

Clearly A and C are the same person while A and B are not. How do we express that mathematically? Given a distance function d(x1, x2), we want d(A, B) > d(A, C). In a real face recognition application, how small must d(x1, x2) be before two faces are declared the same? That threshold depends on the parameters used when training the model, and we will return to it below. Note that if d is cosine similarity, a larger value means more similar. A general face recognition model therefore contains two units: feature extraction (the feature mapping) and distance computation.
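To make the distance unit concrete, here is a minimal ND4J sketch, assuming two embeddings have already been extracted; Transforms.cosineSim and Transforms.euclideanDistance are ND4J utility methods:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class DistanceDemo {
    public static void main(String[] args) {
        // Stand-ins for two face embeddings produced by a feature extractor
        INDArray embA = Nd4j.rand(1, 128);
        INDArray embB = Nd4j.rand(1, 128);

        double cosine = Transforms.cosineSim(embA, embB);            // higher = more similar
        double euclidean = Transforms.euclideanDistance(embA, embB); // lower = more similar

        System.out.println("cosine similarity: " + cosine);
        System.out.println("euclidean distance: " + euclidean);
    }
}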
2. Building the Model
How do we obtain such a feature mapping? For image processing, convolutional neural networks are currently the best tool available. DeepLearning4J ships with a pretrained VggFace model based on VGG-16. The VggFace download address is https://dl4jdata.blob.core.windows.net/models/vgg16_dl4j_vggface_inference.v1.zip. How was this address found? Step into the VGG16 source: the DL4JResources.getURLString call inside the pretrainedUrl method contains the download addresses of the pretrained models, and the URLs for VGG19, ResNet50 and the other pretrained zoo models can be found the same way. The source is as follows:
public class VGG16 extends ZooModel {

    @Builder.Default private long seed = 1234;
    @Builder.Default private int[] inputShape = new int[] {3, 224, 224};
    @Builder.Default private int numClasses = 0;
    @Builder.Default private IUpdater updater = new Nesterovs();
    @Builder.Default private CacheMode cacheMode = CacheMode.NONE;
    @Builder.Default private WorkspaceMode workspaceMode = WorkspaceMode.ENABLED;
    @Builder.Default private ConvolutionLayer.AlgoMode cudnnAlgoMode = ConvolutionLayer.AlgoMode.PREFER_FASTEST;

    private VGG16() {}

    @Override
    public String pretrainedUrl(PretrainedType pretrainedType) {
        if (pretrainedType == PretrainedType.IMAGENET)
            return DL4JResources.getURLString("models/vgg16_dl4j_inference.zip");
        else if (pretrainedType == PretrainedType.CIFAR10)
            return DL4JResources.getURLString("models/vgg16_dl4j_cifar10_inference.v1.zip");
        else if (pretrainedType == PretrainedType.VGGFACE)
            return DL4JResources.getURLString("models/vgg16_dl4j_vggface_inference.v1.zip");
        else
            return null;
    }
    // ... remainder of the class omitted
}
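As an aside, instead of downloading the zip by hand you can let the model zoo fetch and cache it. A sketch, assuming DL4J's ZooModel.initPretrained API and network access:

import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.zoo.PretrainedType;
import org.deeplearning4j.zoo.model.VGG16;

public class LoadVggFace {
    public static void main(String[] args) throws Exception {
        // Downloads the VGGFace weights on first use and caches them locally
        ComputationGraph vggFace =
                (ComputationGraph) VGG16.builder().build().initPretrained(PretrainedType.VGGFACE);
        System.out.println(vggFace.summary());
    }
}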
The vgg16 model structure is as follows:
====================================================================================================
VertexName (VertexType) nIn,nOut TotalParams ParamsShape Vertex Inputs
====================================================================================================
input_1 (InputVertex) -,- - - -
conv1_1 (ConvolutionLayer) 3,64 1,792 W:{64,3,3,3}, b:{1,64} [input_1]
conv1_2 (ConvolutionLayer) 64,64 36,928 W:{64,64,3,3}, b:{1,64} [conv1_1]
pool1 (SubsamplingLayer) -,- 0 - [conv1_2]
conv2_1 (ConvolutionLayer) 64,128 73,856 W:{128,64,3,3}, b:{1,128} [pool1]
conv2_2 (ConvolutionLayer) 128,128 147,584 W:{128,128,3,3}, b:{1,128} [conv2_1]
pool2 (SubsamplingLayer) -,- 0 - [conv2_2]
conv3_1 (ConvolutionLayer) 128,256 295,168 W:{256,128,3,3}, b:{1,256} [pool2]
conv3_2 (ConvolutionLayer) 256,256 590,080 W:{256,256,3,3}, b:{1,256} [conv3_1]
conv3_3 (ConvolutionLayer) 256,256 590,080 W:{256,256,3,3}, b:{1,256} [conv3_2]
pool3 (SubsamplingLayer) -,- 0 - [conv3_3]
conv4_1 (ConvolutionLayer) 256,512 1,180,160 W:{512,256,3,3}, b:{1,512} [pool3]
conv4_2 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv4_1]
conv4_3 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv4_2]
pool4 (SubsamplingLayer) -,- 0 - [conv4_3]
conv5_1 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [pool4]
conv5_2 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv5_1]
conv5_3 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv5_2]
pool5 (SubsamplingLayer) -,- 0 - [conv5_3]
flatten (PreprocessorVertex) -,- - - [pool5]
fc6 (DenseLayer) 25088,4096 102,764,544 W:{25088,4096}, b:{1,4096} [flatten]
fc7 (DenseLayer) 4096,4096 16,781,312 W:{4096,4096}, b:{1,4096} [fc6]
fc8 (DenseLayer) 4096,2622 10,742,334 W:{4096,2622}, b:{1,2622} [fc7]
----------------------------------------------------------------------------------------------------
Total Parameters: 145,002,878
Trainable Parameters: 145,002,878
Frozen Parameters: 0
For VggFace we only need the convolution and pooling layers at the front to extract features; the fully connected layers behind them can be discarded. Our model can then be arranged as follows.

[Diagram: the two inputs are stacked, passed through the shared VGG16 convolution/pooling stack, unstacked, and compared by cosine]

Note: StackVertex and UnstackVertex are needed because, by default, when DL4J feeds multiple inputs into a vertex it merges the tensors together, which does not let several inputs share one set of weights. So we first use StackVertex to stack the tensors along dimension 0, share the convolution and pooling layers for feature extraction, and then use UnstackVertex to split the tensors apart for the distance computation that follows.
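To make the tensor flow concrete, here is a small ND4J sketch of the stack/unstack idea (plain concatenation along dimension 0; this mimics the shapes only, it is not DL4J's internal implementation):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.indexing.NDArrayIndex;

public class StackUnstackDemo {
    public static void main(String[] args) {
        int batch = 4;
        INDArray input1 = Nd4j.rand(new int[] {batch, 3, 224, 224});
        INDArray input2 = Nd4j.rand(new int[] {batch, 3, 224, 224});

        // StackVertex: concatenate along dim 0 -> [8, 3, 224, 224],
        // so both inputs flow through the same conv/pool weights in one pass
        INDArray stacked = Nd4j.concat(0, input1, input2);

        // UnstackVertex(0, 2) / UnstackVertex(1, 2): split dim 0 back into two halves
        INDArray first = stacked.get(NDArrayIndex.interval(0, batch));
        INDArray second = stacked.get(NDArrayIndex.interval(batch, 2 * batch));

        System.out.println(java.util.Arrays.toString(first.shape()));  // [4, 3, 224, 224]
        System.out.println(java.util.Arrays.toString(second.shape())); // [4, 3, 224, 224]
    }
}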
The next problem: DL4J's transfer learning API can only append structure to the tail of a model, while our scenario puts part of the pretrained model in the middle. What now? No rush; let's look at the transfer learning API's source and see how DL4J wires things up. The build method of org.deeplearning4j.nn.transferlearning.TransferLearning gives the clue.
public ComputationGraph build() {
    initBuilderIfReq();
    ComputationGraphConfiguration newConfig = editedConfigBuilder
            .validateOutputLayerConfig(validateOutputLayerConfig == null ? true : validateOutputLayerConfig)
            .build();
    if (this.workspaceMode != null)
        newConfig.setTrainingWorkspaceMode(workspaceMode);
    ComputationGraph newGraph = new ComputationGraph(newConfig);
    newGraph.init();
    int[] topologicalOrder = newGraph.topologicalSortOrder();
    org.deeplearning4j.nn.graph.vertex.GraphVertex[] vertices = newGraph.getVertices();
    if (!editedVertices.isEmpty()) {
        //set params from orig graph as necessary to new graph
        for (int i = 0; i < topologicalOrder.length; i++) {
            if (!vertices[topologicalOrder[i]].hasLayer())
                continue;
            org.deeplearning4j.nn.api.Layer layer = vertices[topologicalOrder[i]].getLayer();
            String layerName = vertices[topologicalOrder[i]].getVertexName();
            long range = layer.numParams();
            if (range <= 0)
                continue; //some layers have no params
            if (editedVertices.contains(layerName))
                continue; //keep the changed params
            INDArray origParams = origGraph.getLayer(layerName).params();
            layer.setParams(origParams.dup()); //copy over origGraph params
        }
    } else {
        newGraph.setParams(origGraph.params());
    }
    // ... remainder of the method omitted
So it simply calls layer.setParams to set the relevant parameters on each layer. Now we have a plan: build a model with exactly the same structure as vgg16 and set vgg16's parameters into the new model. In essence, once a deep learning model has been trained, what is valuable is just its parameters; with those in hand we can use the model however we like. Enough talk; here is the code that builds our target model:
private static ComputationGraph buildModel() {
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder().seed(123)
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
                .activation(Activation.RELU)
                .graphBuilder()
                .addInputs("input1", "input2")
                .addVertex("stack", new StackVertex(), "input1", "input2")
.layer("conv1_1",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nIn(3).nOut(64)
.build(), "stack")
.layer("conv1_2",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(64).build(),
"conv1_1")
.layer("pool1",
new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
.stride(2, 2).build(),
"conv1_2")
// block 2
.layer("conv2_1",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(128).build(),
"pool1")
.layer("conv2_2",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(128).build(),
"conv2_1")
.layer("pool2",
new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
.stride(2, 2).build(),
"conv2_2")
// block 3
.layer("conv3_1",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
"pool2")
.layer("conv3_2",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
"conv3_1")
.layer("conv3_3",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(256).build(),
"conv3_2")
.layer("pool3",
new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
.stride(2, 2).build(),
"conv3_3")
// block 4
.layer("conv4_1",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"pool3")
.layer("conv4_2",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"conv4_1")
.layer("conv4_3",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"conv4_2")
.layer("pool4",
new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
.stride(2, 2).build(),
"conv4_3")
// block 5
.layer("conv5_1",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"pool4")
.layer("conv5_2",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"conv5_1")
.layer("conv5_3",
new ConvolutionLayer.Builder().kernelSize(3, 3).stride(1, 1).padding(1, 1).nOut(512).build(),
"conv5_2")
.layer("pool5",
new SubsamplingLayer.Builder().poolingType(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2)
.stride(2, 2).build(),
"conv5_3")
.addVertex("unStack1", new UnstackVertex(0, 2), "pool5")
.addVertex("unStack2", new UnstackVertex(1, 2), "pool5")
.addVertex("cosine", new CosineLambdaVertex(), "unStack1", "unStack2")
.addLayer("out", new LossLayer.Builder().build(), "cosine").setOutputs("out")
.setInputTypes(InputType.convolutionalFlat(224, 224, 3), InputType.convolutionalFlat(224, 224, 3))
                .build();
        ComputationGraph network = new ComputationGraph(conf);
        network.init();
        return network;
}
Next we read VGG16's parameters and set them into the new model. To keep the code simple, our layer names were chosen to match vgg16's:
String vggLayerNames = "conv1_1,conv1_2,conv2_1,conv2_2,conv3_1,conv3_2,conv3_3,conv4_1,conv4_2,conv4_3,conv5_1,conv5_2,conv5_3";
File vggfile = new File("F:/vgg16_dl4j_vggface_inference.v1.zip");
ComputationGraph vggFace = ModelSerializer.restoreComputationGraph(vggfile);
ComputationGraph model = buildModel();
for (String name : vggLayerNames.split(",")) {
    model.getLayer(name).setParams(vggFace.getLayer(name).params().dup());
}
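To sanity-check the copy, the parameters of each layer can be compared across the two graphs; a quick sketch using INDArray.equalsWithEps:

// Verify the weights really were copied (1e-5 tolerance)
for (String name : vggLayerNames.split(",")) {
    boolean same = model.getLayer(name).params()
            .equalsWithEps(vggFace.getLayer(name).params(), 1e-5);
    System.out.println(name + " parameters copied: " + same);
}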
The feature extraction layers are done. After extracting features we must compute the distance, which requires a custom layer. DL4J's automatic differentiation makes custom layers very easy to implement; here we choose SameDiffLambdaVertex, because this vertex needs no parameters of its own and merely computes the cosine. The code:
public class CosineLambdaVertex extends SameDiffLambdaVertex {

    @Override
    public SDVariable defineVertex(SameDiff sameDiff, VertexInputs inputs) {
        SDVariable input1 = inputs.getInput(0);
        SDVariable input2 = inputs.getInput(1);
        return sameDiff.expandDims(sameDiff.math.cosineSimilarity(input1, input2, 1, 2, 3), 1);
    }

    @Override
    public InputType getOutputType(int layerIndex, InputType... vertexInputs) throws InvalidInputTypeException {
        return InputType.feedForward(1);
    }
}
Note: after the cosine is computed, expandDims widens the 1-D tensor to 2-D; this is done so that the model's accuracy can be validated on the LFW dataset.
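A standalone SameDiff sketch of that shape change, assuming pool5 features of shape [batch, 512, 7, 7]:

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class CosineShapeCheck {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable a = sd.var("a", Nd4j.rand(new int[] {4, 512, 7, 7}));
        SDVariable b = sd.var("b", Nd4j.rand(new int[] {4, 512, 7, 7}));
        // cosine over dims 1,2,3 leaves shape [4]; expandDims turns it into [4, 1]
        SDVariable cos = sd.expandDims(sd.math.cosineSimilarity(a, b, 1, 2, 3), 1);
        System.out.println(java.util.Arrays.toString(cos.eval().shape())); // [4, 1]
    }
}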
DL4J also supports other kinds of custom layers and vertices; there are five in total:
- Layers: standard single input, single output layers defined using SameDiff. To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLayer
- Lambda layers: as above, but without any parameters. You only need to implement a single method for these! To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer
- Graph vertices: multiple inputs, single output layers usable only in ComputationGraph. To implement: extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffVertex
- Lambda vertices: as above, but without any parameters. Again, you only need to implement a single method for these! To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaVertex
- Output layers: An output layer, for calculating scores/losses. Used as the final layer in a network. To implement, extend org.deeplearning4j.nn.conf.layers.samediff.SameDiffOutputLayer
Examples: https://github.com/eclipse/deeplearning4j-examples/tree/master/samediff-examples
Documentation: https://github.com/eclipse/deeplearning4j-examples/blob/master/samediff-examples/src/main/java/org/nd4j/examples/samediff/customizingdl4j/README.md
One last question remains: how do we define the output layer? It needs no parameters and no computation; it only has to pass the cosine result through. DL4J's LossLayer fits this naturally: it has no parameters, and its activation is the identity function (IDENTITY). With that the model is complete; the final structure is:
=========================================================================================================
VertexName (VertexType) nIn,nOut TotalParams ParamsShape Vertex Inputs
=========================================================================================================
input1 (InputVertex) -,- - - -
input2 (InputVertex) -,- - - -
stack (StackVertex) -,- - - [input1, input2]
conv1_1 (ConvolutionLayer) 3,64 1,792 W:{64,3,3,3}, b:{1,64} [stack]
conv1_2 (ConvolutionLayer) 64,64 36,928 W:{64,64,3,3}, b:{1,64} [conv1_1]
pool1 (SubsamplingLayer) -,- 0 - [conv1_2]
conv2_1 (ConvolutionLayer) 64,128 73,856 W:{128,64,3,3}, b:{1,128} [pool1]
conv2_2 (ConvolutionLayer) 128,128 147,584 W:{128,128,3,3}, b:{1,128} [conv2_1]
pool2 (SubsamplingLayer) -,- 0 - [conv2_2]
conv3_1 (ConvolutionLayer) 128,256 295,168 W:{256,128,3,3}, b:{1,256} [pool2]
conv3_2 (ConvolutionLayer) 256,256 590,080 W:{256,256,3,3}, b:{1,256} [conv3_1]
conv3_3 (ConvolutionLayer) 256,256 590,080 W:{256,256,3,3}, b:{1,256} [conv3_2]
pool3 (SubsamplingLayer) -,- 0 - [conv3_3]
conv4_1 (ConvolutionLayer) 256,512 1,180,160 W:{512,256,3,3}, b:{1,512} [pool3]
conv4_2 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv4_1]
conv4_3 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv4_2]
pool4 (SubsamplingLayer) -,- 0 - [conv4_3]
conv5_1 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [pool4]
conv5_2 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv5_1]
conv5_3 (ConvolutionLayer) 512,512 2,359,808 W:{512,512,3,3}, b:{1,512} [conv5_2]
pool5 (SubsamplingLayer) -,- 0 - [conv5_3]
unStack1 (UnstackVertex) -,- - - [pool5]
unStack2 (UnstackVertex) -,- - - [pool5]
cosine (SameDiffGraphVertex) -,- - - [unStack1, unStack2]
out (LossLayer) -,- 0 - [cosine]
---------------------------------------------------------------------------------------------------------
Total Parameters: 14,714,688
Trainable Parameters: 14,714,688
Frozen Parameters: 0
=========================================================================================================
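Before moving on to LFW, the finished graph can be smoke-tested on a single pair of images. A minimal helper sketch; the buildModel()/weight-copy steps above are assumed, and the /255 scaling matches the evaluation iterator below:

import org.datavec.image.loader.NativeImageLoader;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.api.ndarray.INDArray;
import java.io.File;

public class CompareTwoFaces {
    // model: the graph built by buildModel() with the VggFace weights copied in
    static double cosine(ComputationGraph model, File img1, File img2) throws Exception {
        NativeImageLoader loader = new NativeImageLoader(224, 224, 3);
        INDArray face1 = loader.asMatrix(img1).div(255);
        INDArray face2 = loader.asMatrix(img2).div(255);
        // Two inputs, one output vertex ("out"); the single value is the cosine
        return model.output(face1, face2)[0].getDouble(0);
    }
}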
3. Validating the Model's Accuracy on LFW
LFW download address: http://vis-www.cs.umass.edu/lfw/. After downloading, I placed it under F:/facerecognition.
Build the test set with both positive and negative pairs: same-person faces go in one group, different-person faces in another. The code:
import org.apache.commons.io.FileUtils;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
public class DataTools {
private static final String PARENT_PATH = "F:/facerecognition";
public static void main(String[] args) throws IOException {
File file = new File(PARENT_PATH + "/lfw");
List<File> list = Arrays.asList(file.listFiles());
for (int i = 0; i < list.size(); i++) {
String name = list.get(i).getName();
File[] faceFileArray = list.get(i).listFiles();
if (null == faceFileArray) {
continue;
}
// build positive pairs
if (faceFileArray.length > 1) {
String positiveFilePath = PARENT_PATH + "/pairs/1/" + name;
File positiveFileDir = new File(positiveFilePath);
if (positiveFileDir.exists()) {
positiveFileDir.delete();
}
positiveFileDir.mkdir();
FileUtils.copyFile(faceFileArray[0], new File(positiveFilePath + "/" + faceFileArray[0].getName()));
FileUtils.copyFile(faceFileArray[1], new File(positiveFilePath + "/" + faceFileArray[1].getName()));
}
// build negative pairs
String negativeFilePath = PARENT_PATH + "/pairs/0/" + name;
File negativeFileDir = new File(negativeFilePath);
if (negativeFileDir.exists()) {
negativeFileDir.delete();
}
negativeFileDir.mkdir();
FileUtils.copyFile(faceFileArray[0], new File(negativeFilePath + "/" + faceFileArray[0].getName()));
File[] differentFaceArray = list.get(randomInt(list.size(), i)).listFiles();
int differentFaceIndex = randomInt(differentFaceArray.length, -1);
FileUtils.copyFile(differentFaceArray[differentFaceIndex], new File(negativeFilePath + "/" + differentFaceArray[differentFaceIndex].getName()));
}
}
public static int randomInt(int max, int target) {
Random random = new Random();
while (true) {
int result = random.nextInt(max);
if (result != target) {
return result;
}
}
}
}
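The evaluation iterator below consumes FacePair objects, a class the original post does not show. Here is a minimal hypothetical version inferred from the getList()/getLabel() calls it makes, plus a loader that walks the pairs/0 and pairs/1 directories created by DataTools:

import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical holder: two image files plus a 0/1 label
public class FacePair {
    private final List<File> list;
    private final int label;

    public FacePair(File a, File b, int label) {
        this.list = Arrays.asList(a, b);
        this.label = label;
    }

    public List<File> getList() { return list; }
    public int getLabel() { return label; }

    // Walk pairs/0 (negatives) and pairs/1 (positives) produced by DataTools
    public static List<FacePair> load(String pairsDir) {
        List<FacePair> pairs = new ArrayList<>();
        for (int label = 0; label <= 1; label++) {
            File[] dirs = new File(pairsDir + "/" + label).listFiles();
            if (dirs == null) continue;
            for (File dir : dirs) {
                File[] faces = dir.listFiles();
                if (faces != null && faces.length == 2) {
                    pairs.add(new FacePair(faces[0], faces[1], label));
                }
            }
        }
        return pairs;
    }
}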
With the test set built, we construct the iterator. Images are read with NativeImageLoader, which was introduced in the earlier post on processing images with DataVec in deeplearning4j (《如何利用deeplearning4j中datavec對圖像進行處理》).
public class DataSetForEvaluation implements MultiDataSetIterator {

    private List<FacePair> facePairList;
    private int batchSize;
    private int totalBatches;
    private NativeImageLoader imageLoader;
    private int currentBatch = 0;

    public DataSetForEvaluation(List<FacePair> facePairList, int batchSize) {
        this.facePairList = facePairList;
        this.batchSize = batchSize;
        this.totalBatches = (int) Math.ceil((double) facePairList.size() / batchSize);
        this.imageLoader = new NativeImageLoader(224, 224, 3, new ResizeImageTransform(224, 224));
    }

    @Override
    public boolean hasNext() {
        return currentBatch < totalBatches;
    }

    @Override
    public MultiDataSet next() {
        return next(batchSize);
    }

    @Override
    public MultiDataSet next(int num) {
        int i = currentBatch * batchSize;
        int currentBatchSize = Math.min(batchSize, facePairList.size() - i);
        INDArray input1 = Nd4j.zeros(currentBatchSize, 3, 224, 224);
        INDArray input2 = Nd4j.zeros(currentBatchSize, 3, 224, 224);
        INDArray label = Nd4j.zeros(currentBatchSize, 1);
        for (int j = 0; j < currentBatchSize; j++) {
            try {
                input1.put(new INDArrayIndex[] {NDArrayIndex.point(j), NDArrayIndex.all(), NDArrayIndex.all(), NDArrayIndex.all()},
                        imageLoader.asMatrix(facePairList.get(i).getList().get(0)).div(255));
                input2.put(new INDArrayIndex[] {NDArrayIndex.point(j), NDArrayIndex.all(), NDArrayIndex.all(), NDArrayIndex.all()},
                        imageLoader.asMatrix(facePairList.get(i).getList().get(1)).div(255));
            } catch (Exception e) {
                e.printStackTrace();
            }
            label.putScalar((long) j, 0, facePairList.get(i).getLabel());
            ++i;
        }
        System.out.println(currentBatch);
        ++currentBatch;
        return new org.nd4j.linalg.dataset.MultiDataSet(new INDArray[] {input1, input2},
                new INDArray[] {label});
    }

    @Override
    public void setPreProcessor(MultiDataSetPreProcessor preProcessor) {
    }

    @Override
    public MultiDataSetPreProcessor getPreProcessor() {
        return null;
    }

    @Override
    public boolean resetSupported() {
        return true;
    }

    @Override
    public boolean asyncSupported() {
        return false;
    }

    @Override
    public void reset() {
        currentBatch = 0;
    }
}
Now we can evaluate the model's performance. Accuracy and precision are passable, but the F1 score is a bit low.
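The numbers below come from the standard evaluation loop; a sketch, assuming ComputationGraph.evaluate accepts our MultiDataSetIterator and using the hypothetical FacePair.load helper from above:

// model: the graph from section 2; the pairs directory was produced by DataTools
List<FacePair> pairs = FacePair.load("F:/facerecognition/pairs");
DataSetForEvaluation iterator = new DataSetForEvaluation(pairs, 32);
Evaluation eval = model.evaluate(iterator);
System.out.println(eval.stats());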
========================Evaluation Metrics========================
# of classes: 2
Accuracy: 0.8973
Precision: 0.9119
Recall: 0.6042
F1 Score: 0.7268
Precision, recall & F1: reported for positive class (class 1 - "1") only
=========================Confusion Matrix=========================
0 1
-----------
5651 98 | 0 = 0
665 1015 | 1 = 1
Confusion matrix format: Actual (rowClass) predicted as (columnClass) N times
==================================================================
4. Wrapping the Model into a Service with Spring Boot
Once saved, a model is just a pile of inert parameters. How does it become an online service? Face recognition services come in two forms: 1:1 and 1:N.
1. 1:1 applications
Typical 1:1 applications are face unlock on phones and DingTalk's face-recognition clock-in. These are simple: they only need to confirm that the person presented really is the person claimed, so the computation is tiny and easy to implement.
2. 1:N applications
A typical 1:N application is the police searching for a person by face: without knowing the target's identity beforehand, find who the target face is within a massive face database. When the database is huge, the computation becomes a serious problem.
If results need not be produced in real time, the job can be run offline with Hadoop MapReduce or Spark; all we have to do is wrap the model in a Hive UDF, a MapReduce jar, or a Spark RDD program.
But when results must be real-time, the problem cannot be reduced to an index lookup, so we need a compute framework that solves the global max (or global top-K) problem in a distributed fashion. The rough structure is as follows:
[Architecture diagram: a client request arrives at Node3, which fans the query out to the other nodes]
The blue arrows show the request flow and the green arrows the returned results: a client request lands on node Node3, and Node3 forwards the request to the other nodes for parallel computation. If each node has enough memory, the tensors of the entire face database can be preloaded and kept resident in memory to speed up the computation.
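The per-node work in this design is just a local top-K over cosine scores, which each node returns for the coordinating node to merge. A single-node sketch with hypothetical types and names:

import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LocalTopK {
    // Hypothetical input: faceId -> cosine similarity against the query face
    public static List<Map.Entry<String, Double>> topK(Map<String, Double> scores, int k) {
        return scores.entrySet().stream()
                .sorted(Comparator.comparingDouble((Map.Entry<String, Double> e) -> e.getValue()).reversed())
                .limit(k)
                .collect(Collectors.toList());
    }
}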
Of course, this post does not implement the parallel computing framework; it only wraps the model into a service with Spring Boot. Run FaceRecognitionApplication and visit http://localhost:8080/index; the service looks like this:

[Screenshot of the /index demo page]
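The repository linked below contains the full application; as a flavor of the wrapping, here is a hedged sketch of what such a controller could look like (class name, endpoint, and parameter names are hypothetical, not necessarily those used in the repo):

import org.datavec.image.loader.NativeImageLoader;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class CompareController {

    private final ComputationGraph model; // loaded once at startup, e.g. via ModelSerializer
    private final NativeImageLoader loader = new NativeImageLoader(224, 224, 3);

    public CompareController(ComputationGraph model) {
        this.model = model;
    }

    @PostMapping("/compare")
    public double compare(@RequestParam("face1") MultipartFile face1,
                          @RequestParam("face2") MultipartFile face2) throws Exception {
        INDArray img1 = loader.asMatrix(face1.getInputStream()).div(255);
        INDArray img2 = loader.asMatrix(face2.getInputStream()).div(255);
        return model.output(img1, img2)[0].getDouble(0); // cosine similarity
    }
}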
All the code for this post: https://gitee.com/lxkm/dl4j-demo/tree/master/face-recognition
5. Summary
The main aim of this post was to show how to put DL4J into practice: obtaining a pretrained model's parameters, implementing custom layers, implementing a custom iterator, wrapping the model into a service with Spring Boot, and so on.
Of course, image embedding plus tensor distances alone do not make a complete face recognition system. It also needs face alignment, defenses against AI attacks (a later post will show how to mount an FGSM attack with DL4J), feature extraction for key facial regions, and much other fine-grained work; and turning face recognition into a general SaaS service takes a great deal of work as well.
Training a good face recognition model requires combining several loss functions, for example classifying with softmax first and then fine-tuning with Center Loss or Triplet Loss. A later post will show how to implement Triplet Loss with DL4J to train a face recognition model.
Happiness comes from sharing.
This post is the author's original work. Source: https://my.oschina.net/u/1778239/blog/4575155