
Selective Pseudo-Label Clustering

  • Conference paper
KI 2021: Advances in Artificial Intelligence (KI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12873)


Abstract

Deep neural networks (DNNs) offer a means of addressing the challenging task of clustering high-dimensional data. DNNs can extract useful features, and so produce a lower-dimensional representation, which is more amenable to clustering techniques. As clustering is typically performed in a purely unsupervised setting, where no training labels are available, the question then arises as to how the DNN feature extractor can be trained. The most accurate existing approaches combine the training of the DNN with the clustering objective, so that information from the clustering process can be used to update the DNN to produce better features for clustering. One problem with this approach is that the “pseudo-labels” produced by the clustering algorithm are noisy, and any errors that they contain will hurt the training of the DNN. In this paper, we propose selective pseudo-label clustering, which uses only the most confident pseudo-labels for training the DNN. We formally prove the performance gains under certain conditions. Applied to the task of image clustering, the new approach achieves state-of-the-art performance on three popular image datasets.
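To make the idea concrete, the following is a minimal Python sketch of one round of the selective pseudo-label idea described above: cluster the current DNN features, keep only the most confidently assigned points, and fine-tune the feature extractor on their pseudo-labels. This is not the authors' implementation; the encoder and train_on callables, the use of a Gaussian mixture for soft assignments, and the keep_frac threshold are all illustrative assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def selective_pseudo_label_step(encoder, train_on, X, n_clusters=10, keep_frac=0.5):
        """One hypothetical round: cluster encodings, select confident pseudo-labels, retrain."""
        Z = encoder(X)                                     # lower-dimensional features from the DNN
        gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Z)
        probs = gmm.predict_proba(Z)                       # soft cluster assignments
        pseudo_labels = probs.argmax(axis=1)
        confidence = probs.max(axis=1)
        cutoff = np.quantile(confidence, 1.0 - keep_frac)  # keep the most confident fraction
        selected = confidence >= cutoff
        train_on(X[selected], pseudo_labels[selected])     # update the DNN on confident labels only
        return pseudo_labels, selected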



Acknowledgments

This work was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1 and by the AXA Research Fund. We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1) and GPU computing support by Scan Computers International Ltd.

Author information

Correspondence to Louis Mahon.

Appendices

Appendix A: Full Proofs

This appendix contains the full proofs of the results in Sect. 4.

A.1 More Accurate Pseudo-Labels Supplement

The only part omitted from the argument in the main paper is a proof for the claim about the entropy of the random variable X. This is supplied by the following proposition.

Proposition 1

Given a categorical random variable X of the form

$$\begin{aligned} p(X=c_0) = t \\ \forall c \ne c_0, p(X=c) = \frac{1-t}{C-1}, \end{aligned}$$

for some \(1/C \le t \le 1\), the entropy H(X) is a strictly decreasing function of t.
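For example, for \(C = 2\) the distribution reduces to a coin with bias t, and H(X) is the binary entropy

$$\begin{aligned} H(X) = - t\log t - (1-t)\log (1-t), \end{aligned}$$

which is maximal at \(t = 1/2\) and decreases to 0 as t increases to 1.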

Proof

$$\begin{aligned} H(X) =&- t\log t - (1-t)\log \frac{1-t}{C-1} \\ \frac{d(H(X))}{dt} =&- \log t - 1 + \log \frac{1-t}{C-1} + 1 \\ =&- \log t + \log (1-t) - \log (C-1) \\ =&- \log \left( \frac{t}{1-t}(C-1)\right) . \end{aligned}$$

The argument to the \(\log \) is a strictly increasing function of t on (0, 1), and it equals 1 at \(t = 1/C\). Therefore, for \(1/C < t < 1\), it is strictly greater than 1, which gives

$$\begin{aligned} \frac{d(H(X))}{dt} = - \log \left( \frac{t}{1-t}(C-1)\right) < -\log 1 = 0. \end{aligned}$$

The derivative is strictly negative on the interior of the interval \([1/C, 1]\), so, as a function of t on this interval, H(X) is strictly decreasing.
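This monotonicity is easy to check numerically; the following sketch (not part of the paper; the number of classes and the grid of t values are arbitrary choices) verifies it for \(C = 10\).

    import numpy as np

    def entropy(t, C=10):
        # categorical distribution from Proposition 1: mass t on c_0, the rest uniform
        p = np.full(C, (1.0 - t) / (C - 1))
        p[0] = t
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    ts = np.linspace(1.0 / 10, 1.0, 50)
    hs = [entropy(t) for t in ts]
    assert all(a > b for a, b in zip(hs, hs[1:]))  # H(X) strictly decreases in t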

A.2 Lemma 1 Supplement

The following is a proof for the claim that \(u_{ same } < u_{ diff }\), as stated in Sect. 4.

Decomposing \(u_{ same }\) according to the definition of variance (as the expectation of the square minus the square of the expectation) gives

$$\begin{aligned} \underset{x, x' \sim T}{\mathbb {E}}[w^T(x - x') - \eta w'(||x||^2-||x'||^2)]^2 \\ +\,\text {Var}(w^T(x - x') - \eta w'(||x||^2-||x'||^2)). \end{aligned}$$

The expectation term equals 0, as

$$\begin{aligned} w^T\underset{x, x' \sim T}{\mathbb {E}}[(x - x')] - \eta w'\underset{x, x' \sim T}{\mathbb {E}}[(||x||^2-||x'||^2)] \\ =\,w^T(\mathbb {E}[T] - \mathbb {E}[T]) - \eta w'(\mathbb {E}[||T||^2]-\mathbb {E}[||T||^2]) = 0 . \end{aligned}$$

By symmetry, we can replace covariances involving \(x'\) with the same involving x. The remaining term can then be rearranged to give

$$\begin{aligned} u_{same} = 2\text {Var}(w^Tx - \eta w'||x||^2) \\ = 2w^TCov(T)w + 2\eta w' Var(||x||^2) - 4\text {Cov}(w^Tx, \eta w'||x||^2). \end{aligned}$$

Now rewrite \(u_{ diff }\). Decomposing as above gives

$$\begin{aligned} \underset{x, x' \sim T}{\mathbb {E}}[w^T(x - x') - \eta w'(||x-x'||^2)]^2 \\ +\,\text {Var}(w^T(x - x') - \eta w'(||x-x'||^2))\,, \end{aligned}$$

and here the expectation term does not equal 0:

$$\begin{aligned} (w^T\underset{x, x' \sim T}{\mathbb {E}}[(x - x')] - \eta w' \underset{x, x' \sim T}{\mathbb {E}}[(||x-x'||^2)])^2 \\ =\,(\eta w')^2\underset{x, x' \sim T}{\mathbb {E}}[||x-x'||^2]^2. \end{aligned}$$

The variance term can be expanded to give:

$$\begin{aligned} \text {Var}(w^T(x - x') - \eta w'(||x-x'||^2)) \\ =\,2w^TCov(T)w + 2\eta w'\text {Var}(||x - x'||^2) \\ -\,4\text {Cov}(w^Tx, \eta w'||x - x'||^2). \end{aligned}$$

By comparing terms, we can see that this expression is at least as large as \(u_{same}\). First, consider the covariance terms.

Claim. \(\text {Cov}(w^Tx, \eta w'||x - x'||^2) = \text {Cov}(w^Tx,\eta w'||x||^2)\).

$$\begin{aligned}&\text {Cov}(w^Tx, \eta w'||x - x'||^2) \\&=\mathbb {E}[w^Tx\eta w'||x-x'||^2] - \mathbb {E}[w^Tx]\mathbb {E}[\eta w'||x - x'||^2] \\&= \eta w'\mathbb {E}[w^Tx||x-x'||^2] - 0\cdot \mathbb {E}[\eta w'||x - x'||^2]\\&= \eta w'\mathbb {E}[w^Tx||x-x'||^2] \\&= \eta w'\mathbb {E}\Big [w^Tx\sum _{k}(x_k^2 - 2x_kx'_k + x_k^{'2})\Big ] \\&= \eta w'\sum _{k}\Big (\mathbb {E}[w^Txx_k^2] - 2\mathbb {E}[w^Txx_k]\mathbb {E}[x'_k] + \mathbb {E}[w^Tx]\mathbb {E}[x_k^{'2}]\Big ) \\&= \eta w'\sum _{k}\Big (\mathbb {E}[w^Txx_k^2] - 2\mathbb {E}[w^Txx_k]\cdot 0 + 0\cdot \mathbb {E}[x_k^{'2}]\Big ) \\&= \eta w'\sum _{k}\mathbb {E}[w^Txx_k^2] \\&= \eta w'\mathbb {E}\Big [w^Tx\sum _{k}x_k^2\Big ] \\&= \eta w'\mathbb {E}[w^Tx||x||^2] \\&= \eta w'\mathbb {E}[w^Tx||x||^2] - 0\cdot \mathbb {E}[\eta w'||x||^2] \\&= \mathbb {E}[w^Tx\eta w'||x||^2] - \mathbb {E}[w^Tx]\mathbb {E}[\eta w'||x||^2] \\&= \text {Cov}(w^Tx,\eta w'||x||^2). \end{aligned}$$

So, we see the covariance terms are equal.

Next, compare the second variance terms; in the following expansion, the cross-covariance terms vanish because x and \(x'\) are independent and zero-mean.

Claim. \(\text {Var}(||x - x'||^2) \ge \text {Var}(||x||^2)\).

$$\begin{aligned}&\text {Var}(||x - x'||^2) \\&=\, \text {Var}\left( \sum _{k=0}^{nz}(x)_k^2 + (x')_k^2 - 2(x)_k(x')_k\right) \\&=\, \text {Var}\left( \sum _{k=0}^{nz}(x)_k^2\right) + \text {Var}\left( \sum _{k=0}^{nz}(x')_k^2\right) + 4{{\,\mathrm{Var}\,}}\left( \sum _{k=0}^{nz} x_kx'_k\right) \\&=\, 2\text {Var}\left( \sum _{k=0}^{nz}(x)_k^2\right) + 4{{\,\mathrm{Var}\,}}\left( \sum _{k=0}^{nz} x_kx'_k\right) \\&=\, 2\text {Var}(||x||^2) + 4{{\,\mathrm{Var}\,}}(x^Tx')\\&\ge \, \text {Var}(||x||^2). \end{aligned}$$

Assuming that the data are not all identical, this implies that \(u_{ diff }\) is strictly greater than \(u_{same}\).

$$\begin{aligned}&u_{ diff } - u_{same} \\ =\,&(\eta w')^2\underset{x, x' \sim T}{\mathbb {E}}[||x-x'||^2]^2 + 2w^T{{\,\mathrm{Cov}\,}}(T)w \\&+2\eta w'\text {Var}(||x - x'||^2) - 4{{\,\mathrm{Cov}\,}}(w^Tx, \eta w'||x - x'||^2) \\&- (2w^T{{\,\mathrm{Cov}\,}}(T)w + 2\eta w' \text {Var}(||x||^2) \\&- 4{{\,\mathrm{Cov}\,}}(w^Tx, \eta w'||x||^2)) \\ =\,&(\eta w')^2\underset{x, x' \sim T}{\mathbb {E}}[||x-x'||^2]^2 \\&+ 2\eta w'\left( \text {Var}(||x - x'||^2) - \text {Var}(||x||^2)\right) \\&- 4\left( {{\,\mathrm{Cov}\,}}(w^Tx, \eta w'||x - x'||^2) - {{\,\mathrm{Cov}\,}}(w^Tx, \eta w'||x||^2)\right) \\ =\,&(\eta w')^2\underset{x, x' \sim T}{\mathbb {E}}[||x-x'||^2]^2 \\&+ 2\eta w'\left( \text {Var}(||x - x'||^2) - \text {Var}(||x||^2)\right) \\ \ge \,&(\eta w')^2\underset{x, x' \sim T}{\mathbb {E}}[||x-x'||^2]^2 > 0. \end{aligned}$$
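The inequality can also be checked numerically. The following sketch (not from the paper) estimates the two expectations as they appear in the decompositions above, for zero-mean Gaussian data; the dimension, sample size, and values of w, \(w'\), and \(\eta \) are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 5, 200_000
    w = rng.normal(size=d)
    w_prime, eta = 0.3, 0.1

    x  = rng.normal(size=(n, d))   # x  ~ T, zero mean
    xp = rng.normal(size=(n, d))   # x' ~ T, independent of x

    lin = (x - xp) @ w
    u_same = np.mean((lin - eta * w_prime * ((x**2).sum(1) - (xp**2).sum(1))) ** 2)
    u_diff = np.mean((lin - eta * w_prime * ((x - xp)**2).sum(1)) ** 2)
    assert u_same < u_diff  # matches the claim above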

A.3 Lemma 2 Supplement

The following is the complete proof of Lemma 2, which was omitted from the main paper.

Proof

\(v_{ diff } - v_{same}\)

$$\begin{aligned} =\,&\mathbb {E}[(w^T(x-z) - w'(x-x')^T(x-z))^2] \\&-\mathbb {E}[(w^T(x-z) - w'(x+x')^T(x-z))^2] \\ =\,&\mathbb {E}[(w^T(x-z) - w'(x-x')^T(x-z))^2 \\&-(w^T(x-z) - w'(x+x')^T(x-z))^2] \\ =\,&\mathbb {E}[(w^T(x-z) - w'(x-x')^T(x-z) \\&+w^T(x-z) - w'(x+x')^T(x-z)) \\&(w^T(x-z) - w'(x-x')^T(x-z) \\&-w^T(x-z) + w'(x+x')^T(x-z))] \\ =\,&\mathbb {E}[(2w^T(x-z) - w'(x-z)^T(x-x'+x+x'))\\&(- w'(x-z)^T(x-x'-x-x'))] \\ =\,&\mathbb {E}[(2w^T(x-z) - 2w'(x-z)^T(x))(2w'(x-z)^T(x'))] \\ =\,&4\mathbb {E}[(w^T(x-z) - w'(x-z)^T(x))w'(x-z)^T]\mathbb {E}[x'] \\ =\,&4\mathbb {E}[(w^T(x-z) - w'(x-z)^T(x))w'(x-z)^T]\vec{0} = 0\,. \end{aligned}$$
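As with Lemma 1, the identity can be checked numerically. The sketch below (an illustration, not part of the paper) estimates both expectations with x, \(x'\), and z drawn independently from zero-mean Gaussians; all constants are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    d, n = 5, 200_000
    w = rng.normal(size=d)
    w_prime = 0.3

    x, xp, z = (rng.normal(size=(n, d)) for _ in range(3))

    base = (x - z) @ w
    v_diff = np.mean((base - w_prime * ((x - xp) * (x - z)).sum(1)) ** 2)
    v_same = np.mean((base - w_prime * ((x + xp) * (x - z)).sum(1)) ** 2)
    # the two estimates should agree up to Monte Carlo error
    assert abs(v_diff - v_same) < 0.05 * v_same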

A.4 Lemma 3 Supplement

The following is the complete proof of Lemma 3, which was omitted from the main paper.

Proof

$$\begin{aligned} {{\,\mathrm{Var}\,}}(T) =\,&\tfrac{1}{2}\underset{x, x' \sim T}{\mathbb {E}}[(x-x')^2] \\ =\,&\tfrac{1}{2}\big (\underset{x, x' \sim T}{\mathbb {E}}[(x-x')^2|y(x) = y(x')]P(y(x) = y(x')) \\&+\underset{x, x' \sim T}{\mathbb {E}}[(x-x')^2|y(x) \ne y(x')]P(y(x) \ne y(x'))\big )\\ =\,&\tfrac{1}{2}\big (sP(y(x) = y(x')) + rP(y(x) \ne y(x'))\big )\\ =\,&\frac{1}{2}\left( s\frac{1}{C} + r\frac{C-1}{C}\right) . \end{aligned}$$

Noting that \(s = 2\mathbb {E}[{{\,\mathrm{Var}\,}}(T|C)]\), and using Eve’s law, we have

$$\begin{aligned} d =\,&{{\,\mathrm{Var}\,}}(T) -s \\ =\,&\frac{1}{2}\left( s\frac{1}{C} + r\frac{C-1}{C}\right) -s \\ =\,&\frac{C-1}{2C}r - \frac{2C -1}{2C}s. \end{aligned}$$
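As a concrete instance of this identity (an illustration, not from the paper), taking \(C = 10\) clusters gives

$$\begin{aligned} d = \frac{1}{2}\left( \frac{s}{10} + \frac{9r}{10}\right) - s = \frac{9}{20}r - \frac{19}{20}s . \end{aligned}$$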

A.5 Theorem 5 Supplement

The following is a more detailed version of the argument given in the main paper.

If \(y(x) = y(x')\), then Lemma 2 means that the expected distance of the encodings of x and \(x'\) to any data point from another cluster is unchanged by whether the update was from points with the same or with different labels. Similarly, the distance between any two other points is unchanged by whether the update was from points with the same or with different labels. This establishes that \(r_T = r_F\). As for the intra-cluster variance, it is smaller after the update with the same labels than with different labels. Lemma 1 shows that the expected distance between the encodings of the two points themselves is smaller if the labels were the same, and the same argument as above shows that all other expected distances within clusters are unchanged.

If \(y(x) \ne y(x')\), then Lemma 2 means that the expected distance of the encodings of x and any data point from the same cluster is unchanged by whether the update was from points with the same or with different labels (and the same for \(x'\)). Similarly, the distance between any two other points is unchanged by whether the update was from points with the same or with different labels. This establishes that \(s_T = s_F\). As for the inter-cluster variance, it is larger after the update with different labels than with the same labels. Lemma 1 shows that the expected distance between the encodings of the two points themselves is larger if the labels were different, and the same argument as above shows that all other expected distances between clusters are unchanged.

Table 4. Sizes of predicted clusters for MNIST.
Table 5. Sizes of predicted clusters for USPS.
Table 6. Sizes of predicted clusters for FashionMNIST.

Appendix B: Extended Results

The results in the main paper report the central tendency of five different training runs for each dataset. Tables 4, 5, and 6 show the sizes of the clusters predicted by SPC for one randomly selected run out of these five. On MNIST and USPS, where the accuracy of SPC is \({>}98\%\), the predicted sizes are close to the true sizes. On FashionMNIST, where the accuracy is \({\sim }65\%\), there is much greater variance. This accounts for the discrepancy between ACC and NMI for FashionMNIST. Most of the errors are put into one large cluster: the cluster that was aligned to ‘coat’ is over three times larger than it should be. This hurts accuracy more than NMI, because the incorrect data points in the ‘coat’ cluster count for zero when calculating the accuracy, but they are not randomly distributed among the other classes, so the conditional entropy of a data point that was mis-clustered as a coat is \(<\log (10)\). In fact, most of the mistakes in the ‘coat’ cluster are pullovers or shirts, and almost none of them are, for example, boots or tops. Comparing the cluster sizes for SPC-HDBSCAN and SPC-GMM also accounts for the difference between ACC and NMI across these two settings on FashionMNIST: SPC-GMM produces more uniformly sized clusters, so the difference between ACC and NMI is smaller.
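To illustrate the effect on NMI (with hypothetical class proportions, not the actual FashionMNIST confusion counts): errors spread uniformly over all ten classes would give a conditional entropy of \(\log (10)\), whereas a cluster dominated by a few related classes has a much lower conditional entropy.

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    uniform = np.full(10, 0.1)               # errors spread over all classes
    concentrated = [0.5, 0.25, 0.2, 0.05]    # hypothetical coat/pullover/shirt/other mix
    print(entropy(uniform))                  # = log(10) ~ 2.30
    print(entropy(concentrated))             # ~ 1.16, well below log(10)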


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Mahon, L., Lukasiewicz, T. (2021). Selective Pseudo-Label Clustering. In: Edelkamp, S., Möller, R., Rueckert, E. (eds) KI 2021: Advances in Artificial Intelligence. KI 2021. Lecture Notes in Computer Science, vol 12873. Springer, Cham. https://doi.org/10.1007/978-3-030-87626-5_12


  • DOI: https://doi.org/10.1007/978-3-030-87626-5_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87625-8

  • Online ISBN: 978-3-030-87626-5

