Today, scientific computing problems arising from complex research and industrial applications demand intense problem-solving capability, and this demand has driven institutions and industry worldwide toward dynamic collaboration among ubiquitous computing resources. Minimizing the processing time of extensive processing loads originating from multiple sources is a significant challenge that, if successfully met, could foster a range of new and creative applications. Motivated by this challenge, we apply divisible load theory (DLT) to the grid computing problem of multiple sources connected to multiple sinks. Prior research in this area considers tasks that arrive at multiple nodes according to a basic stochastic process and presents a first-step technique for scheduling divisible loads from multiple sources to multiple sinks, with and without buffer-capacity constraints. The increasing need for multiprocessing systems and data-intensive computing calls for efficient scheduling of computational loads, especially parallel loads that are divisible among processors and links; over the past decade, DLT has emerged as a powerful tool for modeling such data-intensive computational problems. The purpose of this research is to obtain a closed-form solution for the finish time on a homogeneous system, taking into account the adverse effect of a fault under single-installment scheduling with FIFO (First In, First Out) and LIFO (Last In, First Out) result collection. The system under consideration schedules jobs using a divisible load scheme that distributes an arbitrarily divisible computational load among eligible processors in a bus-based distributed computing environment, covering the single-installment scheme of DLT together with the results-collection phase.
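To make the single-installment scheme concrete, the following is a minimal sketch of the standard DLT load-fraction recursion for a homogeneous bus network, omitting the results-collection phase for simplicity. The symbols z and Tcm (link speed and communication intensity) and w and Tcp (processor speed and computation intensity) follow common DLT notation and are assumptions here; the paper's own model parameters are not shown in this excerpt. The optimality principle used is that all processors finish computing at the same instant.

```python
def single_installment_fractions(n, z, Tcm, w, Tcp):
    """Optimal load fractions for n homogeneous processors on a bus,
    single installment, no result collection (illustrative sketch).

    Equal-finish-time condition gives the geometric recursion
    alpha_{i+1} = sigma * alpha_i with sigma = w*Tcp / (z*Tcm + w*Tcp).
    """
    sigma = (w * Tcp) / (z * Tcm + w * Tcp)
    raw = [sigma ** i for i in range(n)]          # unnormalized fractions
    total = sum(raw)
    alphas = [r / total for r in raw]             # normalize so they sum to 1
    # Processor 1 receives alpha_1 first and computes immediately,
    # so the common finish time is its communication plus computation time.
    finish = alphas[0] * (z * Tcm + w * Tcp)
    return alphas, finish
```

Under these assumptions, each processor's receive-then-compute timeline ends at the same instant, which is the closed-form finish time for the fault-free case.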
In this distributed system, there is a primary processor and a backup processor. All processors periodically checkpoint their results on the backup processor. If any processor fails during task execution, the backup processor takes over the failed process by rolling back to the time of the last checkpoint. The study assumes that at most one processor faults during the execution of a single task. The outcomes of this research may help embedded-system designers approach fault-tolerant methods and performance improvement for load distribution.
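The rollback behavior described above can be sketched as follows. The checkpoint interval, the assumption of zero takeover latency, and the function name are illustrative choices for this sketch, not details taken from the paper.

```python
def finish_time_with_fault(task_time, ckpt_interval, fault_time):
    """Finish time of a task when a processor faults once and the backup
    resumes from the most recent checkpoint (zero takeover overhead
    assumed for illustration)."""
    if fault_time >= task_time:
        return task_time  # fault occurs after completion; no effect
    # Progress saved at the last checkpoint taken before the fault.
    saved = (fault_time // ckpt_interval) * ckpt_interval
    # The backup re-executes everything after that checkpoint.
    return fault_time + (task_time - saved)
```

For example, a 100-unit task checkpointed every 10 units that faults at t = 47 rolls back to the checkpoint at t = 40 and finishes at 47 + 60 = 107; the 7 units of work done since the last checkpoint are the penalty the fault imposes.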