<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2d1 20170631//EN" "JATS-journalpublishing1.dtd">
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>IJIRCSTJournal</PublisherName>
      <JournalTitle>International Journal of Innovative Research in Computer Science and Technology</JournalTitle>
      <PISSN>I</PISSN>
      <EISSN>S</EISSN>
      <Volume-Issue>Volume 13 Issue 5</Volume-Issue>
      <PartNumber/>
      <IssueTopic>Computer Science</IssueTopic>
      <IssueLanguage>English</IssueLanguage>
      <Season>September - October 2025</Season>
      <SpecialIssue>N</SpecialIssue>
      <SupplementaryIssue>N</SupplementaryIssue>
      <IssueOA>Y</IssueOA>
      <PubDate>
        <Year>2025</Year>
        <Month>09</Month>
        <Day>24</Day>
      </PubDate>
      <ArticleType>Computer Sciences</ArticleType>
      <ArticleTitle>Spectral Geometric Regularization: Towards Isometric Invariance for Domain-Generalized Learning</ArticleTitle>
      <SubTitle/>
      <ArticleLanguage>English</ArticleLanguage>
      <ArticleOA>Y</ArticleOA>
      <FirstPage>33</FirstPage>
      <LastPage>37</LastPage>
      <AuthorList>
        <Author>
          <FirstName>Kalyan Chakravarthy Kodela</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>Y</CorrespondingAuthor>
          <ORCID/>
        </Author>
        <Author>
          <FirstName>Rohith Vangalla</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>N</CorrespondingAuthor>
          <ORCID/>
        </Author>
      </AuthorList>
      <DOI>https://doi.org/10.55524/ijircst.2025.13.5.5</DOI>
      <Abstract>Deep learning models often experience significant performance degradation under domain shift, where test data originates from a distribution different from the training data. This paper introduces Spectral Geometric Regularization (SGR), a novel framework designed to learn domain-invariant representations by aligning the intrinsic geometries of source and target domains. Unlike prior methods that often rely on statistical moment matching, SGR operates by minimizing the spectral discrepancy between the eigenvalues of the graph Laplacians constructed from feature manifolds. Grounded in the theory of the Laplace-Beltrami operator, the proposed spectral loss function encourages isometry, a fundamental geometric equivalence, between domains. We provide theoretical guarantees for our framework, establishing the differentiability of the spectral loss and deriving a probabilistic bound on the target error that directly links spectral alignment to improved generalization. As an architecture-agnostic regularizer, SGR presents a principled and theoretically sound alternative to existing domain adaptation paradigms.</Abstract>
      <AbstractLanguage>English</AbstractLanguage>
      <Keywords>Domain Adaptation, Spectral Geometry, Regularization, Representation Learning, Laplace-Beltrami Operator, Generalization Theory</Keywords>
      <URLs>
        <Abstract>https://ijircst.org/abstract.php?article_id=1407</Abstract>
      </URLs>
    </Journal>
  </Article>
</ArticleSet>