In software engineering, a pipeline consists of a chain of processes or other data-processing entities, arranged so that the output of each element of the chain is the input of the next. Some amount of buffer storage is usually provided between consecutive elements.
Pipelines are most efficiently implemented on a multi-tasking operating system, by launching all processes at the same time and automatically servicing the data read requests of each process with the data written by the upstream process. In this way, the scheduler naturally switches the CPU among the processes so as to minimize its idle time.
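This concurrent arrangement can be sketched in Python using the standard `subprocess` module, which exposes the operating system's pipe facility. The two stages below are portable Python one-liners standing in for typical pipeline commands; both run at the same time, connected by a pipe.

```python
import subprocess
import sys

# Stage 1 generates lines; stage 2 sorts whatever arrives on its stdin.
# Both processes are launched before either finishes, so the OS scheduler
# interleaves them as the pipe fills and drains.
p1 = subprocess.Popen(
    [sys.executable, "-c", "print('banana'); print('apple'); print('cherry')"],
    stdout=subprocess.PIPE,
)
p2 = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"],
    stdin=p1.stdout,
    stdout=subprocess.PIPE,
)
p1.stdout.close()            # let p2 hold the only read end of the pipe
output, _ = p2.communicate()
print(output.decode())       # the three words in sorted order
```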
Usually, read and write requests are blocking operations, which means that the execution of the source process, upon writing, is suspended until all data can be written to the destination process, and, likewise, the execution of the destination process, upon reading, is suspended until at least some of the requested data can be obtained from the source process. This cannot lead to a deadlock, in which both processes would wait indefinitely for each other to respond, since at least one of the two processes will soon have its request serviced by the operating system and continue to run.
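The blocking behaviour of a read on an empty pipe can be observed directly with a minimal sketch using `os.pipe`: the reader is suspended until a writer (here, a thread standing in for the source process) supplies data.

```python
import os
import threading
import time

r, w = os.pipe()             # an anonymous OS pipe: read end, write end

def writer():
    time.sleep(0.2)          # the reader blocks during this delay
    os.write(w, b"hello")
    os.close(w)

threading.Thread(target=writer).start()

start = time.monotonic()
data = os.read(r, 1024)      # suspends here until the writer has written
elapsed = time.monotonic() - start
os.close(r)
print(data, elapsed)         # b'hello', after roughly 0.2 seconds
```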
For performance, most operating systems implementing pipes use pipe buffers, which allow the source process to provide more data than the destination process is currently able or willing to receive. Under most Unices and Unix-like operating systems, a special command is also available, typically called "buffer", which implements a pipe buffer of a potentially much larger and configurable size. This command is useful when the destination process is significantly slower than the source process, but it is nevertheless desirable for the source process to complete its task as soon as possible. For example, the source process may be a command which reads an audio track from a CD, and the destination process a command which compresses the waveform audio data to a format like Ogg Vorbis. In this case, buffering the entire track in a pipe buffer allows the CD drive to spin down sooner, and enables the user to remove the CD from the drive before the encoding process has finished. Such a buffer command can be implemented using nothing but the already available operating system primitives for reading and writing data; however, to avoid wasteful busy waiting, additional multithreading capabilities are desirable.
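A buffer command of this kind can be sketched with a reader thread that drains the source into a large in-memory queue, so a slow consumer never stalls the producer until the queue is full. This is a hedged illustration, not the implementation of any real "buffer" utility; the chunk size and queue depth are arbitrary illustrative values.

```python
import io
import queue
import threading

CHUNK = 64 * 1024        # illustrative read size
MAX_CHUNKS = 1024        # illustrative cap (~64 MiB of buffering)

def buffered_copy(src, dst):
    """Copy src to dst through a large queue, decoupling their speeds."""
    q = queue.Queue(maxsize=MAX_CHUNKS)

    def reader():
        while True:
            block = src.read(CHUNK)
            q.put(block)             # blocks only once the buffer is full
            if not block:            # empty bytes object marks end of input
                return

    threading.Thread(target=reader, daemon=True).start()
    while True:
        block = q.get()              # blocks without busy waiting
        if not block:
            break
        dst.write(block)

# Demo with in-memory streams; a real command would pass
# sys.stdin.buffer and sys.stdout.buffer instead.
src = io.BytesIO(b"audio data " * 3)
dst = io.BytesIO()
buffered_copy(src, dst)
print(dst.getvalue())
```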
On single-tasking operating systems, the processes of a pipeline have to be executed one by one in sequential order; thus the output of each process must be saved to a temporary file, which is then read by the next process. Since there is no parallelism or CPU switching, this arrangement is called a "pseudo-pipeline".
For example, the command-line interpreter of MS-DOS (COMMAND.COM) provides pseudo-pipelines with a syntax superficially similar to that of Unix pipelines. The command "dir | sort | more" would be executed like this (albeit with more complicated temporary file names):
- Create temporary file 1.tmp
- Run command "dir", redirecting its output to 1.tmp
- Create temporary file 2.tmp
- Run command "sort", redirecting its input to 1.tmp and its output to 2.tmp
- Run command "more", redirecting its input to 2.tmp, and presenting its output to the user
- Delete 1.tmp and 2.tmp, which are no longer needed
- Return to the command prompt
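The sequence of steps above can be sketched in Python: each stage runs to completion, writing its entire output to a temporary file that the next stage then reads, with no two stages ever running at once. Portable Python one-liners stand in for the dir and sort commands of the example.

```python
import os
import subprocess
import sys
import tempfile

stages = [
    [sys.executable, "-c", "print('banana'); print('apple')"],   # stands in for dir
    [sys.executable, "-c",
     "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"],  # stands in for sort
]

tmp_paths = []
prev_path = None
for cmd in stages:
    fd, out_path = tempfile.mkstemp(suffix=".tmp")   # the "N.tmp" of the example
    tmp_paths.append(out_path)
    with os.fdopen(fd, "wb") as out_f:
        if prev_path is None:
            subprocess.run(cmd, stdout=out_f, check=True)
        else:
            # Each stage starts only after the previous one has finished.
            with open(prev_path, "rb") as in_f:
                subprocess.run(cmd, stdin=in_f, stdout=out_f, check=True)
    prev_path = out_path

with open(prev_path) as f:
    result = f.read()
print(result)

for p in tmp_paths:          # delete the temporary files, no longer needed
    os.remove(p)
```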
Thus, pseudo-pipelines acted like true pipes with a pipe buffer of unlimited size (notwithstanding disk-space limitations), with the significant restriction that a receiving process could not read any data from the pipe buffer until the sending process had finished completely. Besides causing disk traffic that would be unnecessary under a multi-tasking operating system, this implementation also made pipes unsuitable for applications requiring real-time response, such as interactive use (where the user enters commands that the first process in the pipeline receives via stdin, and the last process in the pipeline presents its output to the user via stdout).
A pipeline only allows information to flow in one direction, like water flows in a pipe.
Pipes and filters can be viewed as a form of functional programming, using byte streams as data objects.
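This functional view can be illustrated with Python generators, where each filter is a function over a lazy stream of lines and the pipeline is ordinary function composition. The `grep`- and `head`-like filters here are hypothetical stand-ins chosen only to mirror familiar Unix filters.

```python
def grep(pattern, stream):
    """Pass through only the lines containing the pattern."""
    return (line for line in stream if pattern in line)

def head(n, stream):
    """Pass through at most the first n lines."""
    return (line for _, line in zip(range(n), stream))

lines = iter(["alpha", "beta", "gamma", "beelzebub", "bee"])

# Equivalent in spirit to: ... | grep be | head -n 2
result = list(head(2, grep("be", lines)))
print(result)   # ['beta', 'beelzebub']
```

As in a real pipeline, evaluation is lazy and data flows one way: `head` pulls lines through `grep` on demand and stops the whole stream after two matches.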
This pattern encourages the use of text streams as the input and output of programs. This reliance on text has to be accounted for when creating graphical shells for text programs. See XMLTerm for an approach to this problem.
Process pipelines were invented by Douglas McIlroy, one of the designers of the first UNIX shells, and greatly contributed to the popularity of that operating system. The pipeline can be considered the first non-trivial instance of software componentry.
- Pipeline (Unix) for details specific to UNIX.
- Pipeline (computer) for other computer-related versions of the concept.
- Software design patterns
- Software componentry
- Software engineering
- Data massaging
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.