Shell: How to read pipe from another fd?
It might be that you remember when, a few days ago or so, I talked about piping to another fd? Well, let me introduce you to part two! :)
Yeah, because apparently "unusual" pipes are a thing I find myself needing/wanting to set up often. Also, me & file descriptors makes 6, or something.
Anyhow, quick recap: last time I wanted to pipe output from one process to another, except that I didn't want the reader to be reading from its stdin.
There were reasons for that (having to do with password input), and the solution
was, as you may already know/recall, process substitution.
Now I find myself in some kind of "reverse" situation: I don't mind if the reader reads from its stdin, however the writer will not be writing to its stdout!
Telling it where to write isn't a problem, the problem is that I also need it to continue writing (other) stuff to its stdout, which I want to see ending up on the terminal, as it should.
And the solution is...
...well, not so simple, it turns out. At least as far as I was able to gather.
For starters, process substitution can't help us here. Unless I'm missing something, it really is a pipe in disguise, but one designed to handle the issue I was having last time, where one wants to read not from stdin but from another fd.
However, when one wants the writing to happen on another fd (one that is still the write end of a pipe whose read end is a process's stdin), it can't help us.
And after some time, the solution I came up with remains... not ideal, because the way I did it is with a fifo. I don't like that all that much; notably, I don't like needing a file system where I have write permission and all that jazz. I'd much rather set up a pipe and voilĂ .
But let's not get ahead of ourselves.
Bang, bang...
So here's what I've found myself doing:
```sh
mkfifo fifo
writer --write-to fifo &
reader <fifo
rm -f fifo
```
Which works. Which is nice. But, as I said, not all that nice. Also, maybe buggy.
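To make that concrete, here's a runnable sketch of the same pattern, with stand-in commands of my choosing (echo plays the writer, tr plays the reader, and the fifo lives under a hypothetical /tmp path):

```sh
rm -f /tmp/demo.fifo
mkfifo /tmp/demo.fifo

# A blocking open for writing waits until a reader opens the other end,
# so backgrounding the writer first is safe in this sketch.
echo "special data" > /tmp/demo.fifo &

# The reader gets the fifo on its stdin; prints SPECIAL DATA.
tr 'a-z' 'A-Z' < /tmp/demo.fifo

rm -f /tmp/demo.fifo
```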
Blame, blame...
Maybe buggy because there might be a race condition going on here. If your writer opens the file/fifo in non-blocking mode, the open fails unless the reader has already opened its end; so it's good luck if the reader is faster at opening it, otherwise things will fail.
We could of course ask/patch/whatever the writer so that it opens its file in blocking mode, and then no more race condition. Which is good.
But that still leaves the issue of having to deal with an actual FIFO in the file system, which I'd rather do without...
Revisiting process substitution
So remember how I said process substitution can't help us here? Yeah, turns out that may have been wrong, actually.
Because one can simply redirect some fd into another process, via process substitution, and that does the trick:
```bash
exec 3> >(reader)
writer --write-to +3
exec 3>&-
```
(We understand here that +3 means to write to fd 3.)
And yes, it works. Normal output written to stdout goes to the terminal, special data written to fd 3 goes to the reader, and we close fd 3 so the reader can terminate when all is done.
Noticed how it reads bash and not sh above? Yeah, it works with bash or zsh, but not POSIX sh. So if that's a requirement of yours, I'm afraid you're outta luck with this.
But if not, that's a nice & simple solution to have.
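For the record, here's a self-contained sketch of that pattern, with stand-ins of my choosing (echo for the writer, tr for the reader):

```bash
# tr stands in for the reader: it uppercases whatever arrives on fd 3.
exec 3> >(tr 'a-z' 'A-Z')

echo "normal output"        # stdout: goes straight to the terminal
echo "special data" >&3     # fd 3: goes through the pipe to tr

exec 3>&-                   # close fd 3 so tr sees EOF and can terminate
```

Note that the substituted reader runs asynchronously, so its "SPECIAL DATA" line may land on the terminal a moment after the writer is done.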
And for those looking for POSIX solutions, there might be hope as well!
List, aka Group command
Having encountered such a situation once again, I did once again try to come up with a fifo-free solution, and... may have found one!
Here's an ominous one-liner for you to ponder:
```sh
{ { writer --write-to +3; } 3>&1 >&4 4>&- | reader; } 4>&1
```
Yeah, apparently that does it. Simple, eh?
Specifically, our writer writes its specially-crafted data to fd 3 and its standard output to its stdout, as things were intended.
That fd 3 is redirected to the stdout which will be piped into our reader; yes, that's the magic bit right there :-) Though it does all feel quite normal, once you've managed to see past all those redirections, which is nice actually.
Anyhow, the stdout (fd 1) is now redirected to fd 4, which from the outer list is redirected to the original fd 1/stdout, aka our good ol' terminal.
(We also close fd 4 inside the inner list, since we won't need it there anymore.)
Now that solution doesn't feel too complicated, though I'm sure I'll have forgotten all about it in a day or two - hence me writing this. (Well, that, plus if it can be helpful to others, maybe you, it's a win-win...)
It works, it needs no FIFO or anything, and it is POSIX, so just about any POSIX-compatible shell should do.
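To see it run, here's the same one-liner with stand-in commands of my choosing (echo as the writer, tr as the reader); it should behave the same in any POSIX shell:

```sh
# "normal output" travels fd 1 -> fd 4 -> the original stdout,
# "special data" travels fd 3 -> the pipe -> tr, which uppercases it.
{ { echo "normal output"; echo "special data" >&3; } 3>&1 >&4 4>&- | tr 'a-z' 'A-Z'; } 4>&1
```

Both "normal output" and "SPECIAL DATA" end up on the terminal; their relative order depends on scheduling.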
zsh, zsh...
One last thing before I go: if you're using zsh, you might want to pay attention to the multios option, otherwise you're likely to get unexpected results.
Indeed, zsh has this thing where, with multios, it duplicates things. This can be very nice, since you don't need tee and can simply do:
```zsh
echo something >&1 >log
```
Here something will be written both to stdout and to log.
And it even works in reading mode, e.g.:
```zsh
cat foo | sort <bar
sort <foo <bar
```
Those two commands will give you the same results, aka the sorted content of both files. So really, why bother with the cat? :)
However.
All this to say: this happens whenever an fd gets multiple redirections, and, as we've just seen, a pipe counts as a redirection.
Coming back to our POSIX solution above, notice how we do have both a pipe and a redirection of the same fd, stdout, in the inner list: it is sent to fd 4 (which is our terminal) and also piped into reader.
So, guess what happens?
Yep, anything writer writes to its stdout ends up duplicated, going both to the terminal (as expected) and to our reader, as not-so-expected I would guess.
There's a solution of course: turn it off!
```zsh
unsetopt multios
```
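And in case you want to see the difference for yourself, here's a small demo to run under zsh (log is just a scratch file of my choosing, echo and tr are stand-ins for the writer and reader):

```zsh
setopt multios
echo hello >log | tr 'a-z' 'A-Z'    # prints HELLO *and* writes "hello" to log: duplicated!

unsetopt multios
echo hello >log | tr 'a-z' 'A-Z'    # prints nothing; only log is written
```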