The amount of information available today is almost incalculable, presenting new opportunities to gain insight from data. In this chapter we review some of the work done in the field of Distributed Machine Learning and discuss a problem seldom addressed in the literature: the case in which the distributed information comes from different contexts. We define different contexts as different underlying laws of probability governing the data. Most contributions assume that there is no difference in the underlying law of probability between distributed sources, an assumption that often fails in practice. In this chapter we present a distributed regression model that addresses this problem.
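To make the problem concrete, the following is a minimal sketch (using synthetic data, not the model presented in this chapter) of two distributed sources whose data are governed by different underlying linear laws; pooling their data as if a single law applied yields a regression fit that matches neither context.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two distributed sources ("contexts") governed by different laws:
# source A follows y = 2x, source B follows y = -x (plus small noise).
x_a = rng.uniform(0.0, 1.0, 200)
y_a = 2.0 * x_a + rng.normal(0.0, 0.05, 200)
x_b = rng.uniform(0.0, 1.0, 200)
y_b = -1.0 * x_b + rng.normal(0.0, 0.05, 200)

def fit_slope(x, y):
    """Least-squares slope of the model y = w * x (no intercept)."""
    return float(np.dot(x, y) / np.dot(x, x))

w_a = fit_slope(x_a, y_a)  # recovers a slope near 2
w_b = fit_slope(x_b, y_b)  # recovers a slope near -1

# Pooling all data under the usual "same underlying law" assumption
# blends the two contexts into one slope that fits neither source.
w_pooled = fit_slope(np.concatenate([x_a, x_b]),
                     np.concatenate([y_a, y_b]))
print(w_a, w_b, w_pooled)
```

The pooled slope lands roughly midway between the two per-context slopes, illustrating why a distributed model must account for context differences rather than assume a shared probability law.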