The use of parallel and distributed computing systems entails significant programming costs. The problem is that popular modern parallel programming tools, such as MPI and OpenMP, are quite complex to use: software developers have to take care of the distribution of computational tasks, synchronization, data exchange, and so on.
Different methods have been used to simplify the development and execution of parallel programs: generic tools for automatic program parallelization (for both shared-memory and multicomputer systems); frameworks and libraries for solving particular classes of tasks (typically applications with a high degree of data parallelism); and universal tools that attempt to simplify the technical side of programming parallel and distributed systems.
Sometimes developers employ nonstandard computation paradigms in their tools and libraries. One such paradigm is Dataflow. Variations of Dataflow are used in microprocessor architectures, supercomputers, the organization of computational threads in multithreaded environments, and inter-process communication in distributed systems.
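To make the Dataflow idea concrete, the following is a minimal sketch (not taken from the paper) of dataflow-style execution: each node is a function that fires as soon as all of its inputs are available, so parallelism and ordering follow from data dependencies rather than explicit synchronization. The graph format, node names, and the `dataflow_run` helper are illustrative assumptions only.

```python
from concurrent.futures import ThreadPoolExecutor

def dataflow_run(graph, sources):
    """Execute a dataflow graph given as {node: (func, [input nodes])}.

    `sources` maps source-node names to ready values. Waiting on a
    future expresses a data dependency; no explicit locks are needed.
    Assumes `graph` is listed in topological order, for brevity.
    """
    with ThreadPoolExecutor() as pool:
        # Source nodes are immediately-ready tokens.
        futures = {name: pool.submit(lambda v=v: v) for name, v in sources.items()}

        def make_task(func, deps):
            # Firing rule: block until every input token has arrived.
            return lambda: func(*(futures[d].result() for d in deps))

        for name, (func, deps) in graph.items():
            futures[name] = pool.submit(make_task(func, deps))
        return {name: f.result() for name, f in futures.items()}

# Example graph: c = a + b, then d = c * 2. The two operations are
# ordered only by the data dependency between them.
graph = {
    "c": (lambda a, b: a + b, ["c" == "c" and "a", "b"][0:2] if False else ["a", "b"]),
    "d": (lambda c: c * 2, ["c"]),
}
graph = {
    "c": (lambda a, b: a + b, ["a", "b"]),
    "d": (lambda c: c * 2, ["c"]),
}
result = dataflow_run(graph, {"a": 3, "b": 4})
print(result["d"])  # → 14
```

In a full dataflow runtime the firing rule would be event-driven (a node is scheduled when its last input token arrives) rather than blocking a worker thread, but the dependency-driven ordering shown here is the essence of the model.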
In this paper, the authors describe methods and tools for programming in parallel and distributed environments, based on an analysis of different Dataflow models, including their own. The goal of the proposed design is to simplify parallel software development without a significant loss of execution efficiency.