One Class Contrastive Loss for Anomaly Detection

Anjith George
Feb 15, 2021

In this tutorial, we will see how to use One Class Contrastive Loss (OCCL) for unseen attack detection, or more generally, for anomaly detection.

The problem setting is unseen attack detection, or anomaly detection. Assume that we have enough samples to cover the variations of the normal class, along with some samples of anomalies. Our task is to identify anomalies, possibly leveraging information from the known anomaly samples. However, we cannot know the types of anomalies in advance, and the available anomaly samples do not cover all possible variations.

Typically, one class classifiers are well suited for such outlier detection tasks. However, in practice, the performance of one class classifiers is inferior to that of binary classifiers on known anomalies, since they do not leverage the useful information from the known attack samples.

Clearly, there is a need for a method that can learn a compact one class representation while utilizing the discriminative information from the known anomalies.
In the proposed framework, we use a CNN-based approach to learn a compact feature representation for the normal samples. We propose a novel One Class Contrastive Loss (OCCL) for this task.

A toy example illustrating the issue with binary classifiers when faced with an unknown example. The red line shows the decision boundary learned by a binary classifier, whereas the green dotted line shows the decision boundary learned by a one class classifier.

The above figure shows the motivation for the current approach. Typical binary classifiers overfit to the training data and do not perform well when faced with unseen data at test time. Note that the figure depicts an ideal case in which the normal data (bonafide) is tightly clustered, making the job of the one class classifier easy; in practice, such a compact representation has to be learned.

The main objective of OCCL is to ensure that the normal data is tightly clustered and lies far from the known anomalies. We extend both contrastive loss and center loss to form the new One Class Contrastive Loss (OCCL).

Loss functions acting on the embedding space. Left: bonafide (normal) representations (green) are pulled closer to the center of the bonafide class, while attack (anomaly) embeddings (red) are forced beyond the margin; attack samples already outside the margin do not contribute to the loss. Right: the loss as a function of the distance from the bonafide (normal) center.

We start with the expressions for center loss and contrastive loss:

Center loss:

$$L_{C} = \frac{1}{2}\sum_{i=1}^{m} \lVert x_i - c_{y_i} \rVert_2^2$$

where $x_i$ is the embedding of the $i$-th sample and $c_{y_i}$ is the center of its class.

Contrastive loss:

$$L = (1 - Y)\,\frac{1}{2}(D_W)^2 + Y\,\frac{1}{2}\big(\max(0,\, m - D_W)\big)^2$$

where $D_W$ is the Euclidean distance between the embeddings of a pair, $Y = 0$ for similar pairs, $Y = 1$ for dissimilar pairs, and $m$ is the margin.
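
As a concrete reference, here is a minimal PyTorch sketch of these two standard losses (the function names, tensor shapes, and default margin are my own assumptions, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def center_loss(embeddings, labels, centers):
    """Center loss: pull each embedding towards the center of its own class."""
    # embeddings: (N, d), labels: (N,) long, centers: (num_classes, d)
    return 0.5 * (embeddings - centers[labels]).pow(2).sum(dim=1).mean()

def contrastive_loss(emb1, emb2, y, margin=1.0):
    """Contrastive loss on pairs: y = 0 for similar, y = 1 for dissimilar pairs."""
    d = F.pairwise_distance(emb1, emb2)               # Euclidean distance D_W per pair
    similar = (1 - y) * 0.5 * d.pow(2)                # pull similar pairs together
    dissimilar = y * 0.5 * F.relu(margin - d).pow(2)  # push dissimilar pairs beyond the margin
    return (similar + dissimilar).mean()
```

Note that plain contrastive loss operates on pairs of samples, whereas OCCL (below) measures the distance of each sample to a single class center.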

The expression for OCCL is given as:

$$L_{OCCL} = (1 - Y)\,\frac{1}{2}(D_{C_W})^2 + Y\,\frac{1}{2}\big(\max(0,\, m - D_{C_W})\big)^2$$

where $D_{C_W}$ is the distance of a sample's embedding from the center of the normal class cluster, $Y = 0$ for normal samples, and $Y = 1$ for anomalies.

The center of the normal cluster is not fixed; it is updated during training as a moving average of the embeddings of the normal samples seen in each mini-batch.
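
Below is a minimal PyTorch sketch of OCCL with a moving-average center update. The module name, the update rate `alpha`, and the default margin are placeholders of mine; the exact update rule and hyper-parameters are given in the reference publication.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneClassContrastiveLoss(nn.Module):
    """Sketch of OCCL: y = 0 for normal (bonafide) samples, y = 1 for anomalies.

    Normal embeddings are pulled towards a running center of the normal class;
    anomaly embeddings are pushed beyond the margin from that center.
    """

    def __init__(self, embed_dim, margin=1.0, alpha=0.9):
        super().__init__()
        self.margin = margin
        self.alpha = alpha  # assumed moving-average rate, not from the paper
        # running center of the normal class, updated during training
        self.register_buffer("center", torch.zeros(embed_dim))

    def forward(self, embeddings, y):
        y = y.float()
        # D_{C_W}: Euclidean distance of each embedding from the normal center
        d = torch.norm(embeddings - self.center, dim=1)
        normal_term = (1 - y) * 0.5 * d.pow(2)
        anomaly_term = y * 0.5 * F.relu(self.margin - d).pow(2)

        # update the center as a moving average of the normal embeddings
        if self.training:
            normal = embeddings[y == 0]
            if len(normal) > 0:
                with torch.no_grad():
                    self.center.mul_(self.alpha).add_(
                        (1 - self.alpha) * normal.mean(dim=0))
        return (normal_term + anomaly_term).mean()
```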

Now, we use this loss on the pre-final embedding layer as an auxiliary loss, in addition to binary cross-entropy (BCE). The framework for face presentation attack detection (PAD) is shown below:

Framework for face PAD: at training time, the network is trained using both the OCCL and BCE losses. The learned features are used together with a GMM for one class classification.
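
A rough sketch of how one training step could combine the two losses, assuming a model that returns both the embedding and a logit; the weighting factor `lambda_occl` is an assumed hyper-parameter, not the paper's exact setup:

```python
import torch

# model returns (embedding, logit); occl is the OneClassContrastiveLoss above
bce = torch.nn.BCEWithLogitsLoss()
lambda_occl = 0.5  # assumed weighting between the two losses

def training_step(model, occl, optimizer, images, y):
    """One optimization step with BCE plus the auxiliary OCCL."""
    embeddings, logits = model(images)
    loss = bce(logits.squeeze(1), y.float()) + lambda_occl * occl(embeddings, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```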

The representation learned can be used together with a one class classifier for anomaly detection.
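
For instance, with scikit-learn, a GMM can be fitted on the embeddings of the normal training samples, and its log-likelihood used as an anomaly score (a minimal sketch; the number of components and the threshold are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# normal_embeddings: (N, d) embeddings of normal training samples
# test_embeddings:   (M, d) embeddings of test samples
normal_embeddings = np.random.randn(1000, 128)  # placeholder data
test_embeddings = np.random.randn(10, 128)      # placeholder data

gmm = GaussianMixture(n_components=2, covariance_type="full")
gmm.fit(normal_embeddings)

# low log-likelihood under the normal-class model => likely anomaly
scores = gmm.score_samples(test_embeddings)
threshold = np.percentile(gmm.score_samples(normal_embeddings), 5)
is_anomaly = scores < threshold
```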

Check the reference publication for the details and source code.

Reference:

  1. George, A. and Marcel, S., 2020. Learning one class representations for face presentation attack detection using multi-channel convolutional neural networks. IEEE Transactions on Information Forensics and Security, 16, pp.361–375.
