The black-box nature of artificial intelligence (AI) models prevents end users from understanding, and in some cases trusting, the outputs these models produce. In AI applications where not only the solutions but also the decision paths that lead to those solutions matter, black-box AI models are insufficient. Explainable Artificial Intelligence (XAI) addresses this need by designating a class of AI models whose behavior can be explained to end users. Recently, a number of XAI models have tackled the lack of explainability and interpretability of black-box models in application domains such as energy, healthcare, and finance. The concept of XAI has matured and is attracting considerable attention; however, its integration into distributed systems remains under-explored and deserves further study. This paper presents a detailed systematic review of prior studies that apply XAI models in distributed system domains. We group these studies according to their methodology and application domain. We focus on the open problems, challenges, and issues, and we outline future directions to provide guidelines for researchers and developers in potential upcoming scenarios and investigations.