International Journal of Innovative Research in Computer Science and Technology
Year: 2025, Volume: 13, Issue: 5
Pages: 33–37
Online ISSN: 2350-0557
DOI: 10.55524/ijircst.2025.13.5.5
DOI URL: https://doi.org/10.55524/ijircst.2025.13.5.5
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0): http://creativecommons.org/licenses/by/4.0
Kalyan Chakravarthy Kodela , Rohith Vangalla
Deep learning models often experience significant performance degradation under domain shift, where test data originates from a distribution different from the training data. This paper introduces Spectral Geometric Regularization (SGR), a novel framework designed to learn domain-invariant representations by aligning the intrinsic geometries of source and target domains. Unlike prior methods that often rely on statistical moment matching, SGR operates by minimizing the spectral discrepancy between the eigenvalues of the graph Laplacians constructed from feature manifolds. Grounded in the theory of the Laplace-Beltrami operator, the proposed spectral loss function encourages isometry—a fundamental geometric equivalence—between domains. We provide theoretical guarantees for our framework, establishing the differentiability of the spectral loss and deriving a probabilistic bound on the target error that directly links spectral alignment to improved generalization. As an architecture-agnostic regularizer, SGR presents a principled and theoretically sound alternative to existing domain adaptation paradigms.
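The paper does not include an implementation, but the core idea of the spectral loss — build a neighborhood-graph Laplacian for each domain's feature manifold and penalize the discrepancy between the leading eigenvalues — can be sketched as follows. This is a minimal NumPy illustration under assumed choices (a k-nearest-neighbor graph, a Gaussian edge weight, the unnormalized Laplacian, and a mean-squared spectral penalty); the actual SGR framework may differ in graph construction, normalization, and how gradients are propagated through the eigendecomposition.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W over a symmetrized
    k-NN graph with Gaussian edge weights (one illustrative choice)."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    # connect each point to its k nearest neighbors (excluding itself)
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, nbrs[i]] = np.exp(-dist[i, nbrs[i]] ** 2)
    W = np.maximum(W, W.T)          # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def spectral_discrepancy(Xs, Xt, k=5, m=10):
    """Compare the m smallest Laplacian eigenvalues of the source and
    target feature clouds; zero discrepancy is consistent with the
    isometry the SGR regularizer encourages."""
    ev_s = np.linalg.eigvalsh(knn_laplacian(Xs, k))[:m]
    ev_t = np.linalg.eigvalsh(knn_laplacian(Xt, k))[:m]
    return float(np.mean((ev_s - ev_t) ** 2))
```

Because pairwise distances are invariant to rigid motions, a translated copy of a point cloud yields (numerically) the same Laplacian spectrum, so its discrepancy is near zero — a small sanity check on the isometry intuition. In a training loop this scalar would be added to the task loss as a regularizer; differentiating through the eigenvalues (cf. Magnus-style eigenvalue perturbation results) is what makes that feasible.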
MS Scholar, Department of Software Engineering, ITU, San Jose, USA
