Discussion:
[9fans] rc vs sh
Rudolf Sykora
2012-08-28 09:00:12 UTC
Hello,

I am just curious...
Here
http://9fans.net/archive/2007/11/120

Russ Cox writes he uses bash as his default shell. Does anybody know
the reason? Is this for practicality within the linux environment? Or
has he found rc too limiting?

Ruda
hiro
2012-08-28 09:47:10 UTC
because he uses a mac.
dexen deVries
2012-08-28 09:52:22 UTC
Post by Rudolf Sykora
Hello,
I am just curious...
Here
http://9fans.net/archive/2007/11/120
Russ Cox writes he uses bash as his default shell. Does anybody know
the reason? Is this for practicality within the linux environment? Or
has he found rc too limiting?
FWIW, i'm using bash as the interactive shell too, in `konsole' terminal
emulator, because of bash's interactive line editing and command history. 9term
doesn't fit me.

all scripting -- both standalone and in mkfiles -- goes in rc, though.
--
dexen deVries

[[[↓][→]]]
Rudolf Sykora
2012-08-28 10:27:46 UTC
Post by dexen deVries
FWIW, i'm using bash as the interactive shell too, in `konsole' terminal
emulator, because of bash's interactive line editing and command history. 9term
doesn't fit me.
The thing is that he, according to the reference, switches off
the line editing provided by the readline library.
Hence, although I understand that the mentioned editing tools may be a reason
for using bash for somebody, it doesn't really seem to be Russ' reason.

Ruda
Lucio De Re
2012-08-28 13:07:28 UTC
Post by dexen deVries
FWIW, i'm using bash as the interactive shell too, in `konsole' terminal
emulator, because of bash's interactive line editing and command history. 9term
doesn't fit me.
all scripting -- both standalone and in mkfiles -- goes in rc, though.
Russ uses bash because it is uniformly crappy across all architectures
he has an interest in. There's a similar conversation going on in the
go-nuts user group on google. It is illuminating.

++L
Rudolf Sykora
2012-08-28 13:40:07 UTC
Post by Lucio De Re
Russ uses bash because it is uniformly crappy across all architectures
he has an interest in. There's a similar conversation going on in the
go-nuts user group on google. It is illuminating.
I have been unable to locate the conversation...
Anyway, I do not understand how uniform crappiness can be advantageous...

Ruda
Lucio De Re
2012-08-28 14:10:39 UTC
Post by Rudolf Sykora
Anyway, I do not understand how uniform crappiness can be advantageous...
The issue raised on Go-Nuts is that Bash shouldn't be used for
installing Go, /bin/sh should be used instead. The response is that
Bash is the most uniformly implemented of the /bin/sh's out there and
that none of the other shells (generally referred to as /bin/sh) can
be relied upon not to have incompatible foibles that would trip up a
complicated script headed #!/bin/sh. Therefore, intentionally using
#!/bin/bash (or #!/usr/bin/env -c /bin/bash - from memory) is much
more likely to work without adjustments.
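
A made-up illustration of the kind of foible in question (the script and
config.sh are placeholders, nothing from the Go tree): a script headed
#!/bin/sh that quietly relies on Bash extensions runs where /bin/sh happens
to be Bash and breaks where it is dash or another stricter shell.

#!/bin/sh
# 'source' and arrays are bash extensions; dash has no 'source' builtin
# and rejects the array assignment with a syntax error.
source ./config.sh               # POSIX spelling: . ./config.sh
targets=(linux freebsd darwin)   # arrays do not exist in POSIX sh
echo "${targets[1]}"

Declaring #!/bin/bash instead sidesteps the question of which /bin/sh one
happened to get.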

Hence, it makes sense to stick to bash in one's day-to-day work.

Of course, rc would be preferable, but the target platforms for Go are
not all adequately endowed, and Byron's rc proves the point: it is
slightly incompatible with Plan 9's rc.

++L
erik quanstrom
2012-08-28 14:16:15 UTC
Post by Lucio De Re
The issue raised on Go-Nuts is that Bash shouldn't be used for
installing Go, /bin/sh should be used instead. The response is that
Bash is the most uniformly implemented of the /bin/sh's out there and
that none of the other shells (generally referred to as /bin/sh) can
be relied upon not to have incompatible foibles that would trip up a
complicated script headed #!/bin/sh. Therefore, intentionally using
#!/bin/bash (or #!/usr/bin/env -c /bin/bash - from memory) is much
more likely to work without adjustments.
Hence, it makes sense to stick to bash in one's day-to-day work.
Of course, rc would be preferable, but the target platforms for Go are
not all adequately endowed, and Byron's rc proves the point: it is
slightly incompatible with Plan 9's rc.
i think there's an unexplained twist in the logic. byron's rc is a non
sequitur. rc (the real one from p9p) could be made to run on all the go platforms,
and would be uniform across them all. one suspects therefore that
the reason is either that rc doesn't run on all the platforms, or that
the go team thinks rc is not preferable. for example it could be
that the thought is that rc might distract from the real point here,
go.

i don't know. but the problem isn't the consistency of rc. byron's
rc doesn't count. that's like saying the bourne shell is not consistent
because of bash.

- erik
Lucio De Re
2012-08-28 14:58:23 UTC
Post by erik quanstrom
i don't know. but the problem isn't the consistency of rc. byron's
rc doesn't count. that's like saying the bourne shell is not consistent
because of bash.
But I am saying that, and I believe that is what motivates the Go Team
to continue using Bash. They know that Bash works. They also know
that at any time it is possible for a version of /bin/sh (not the
abstract shell, but the particular instance installed on a particular
platform) to bite them in the butt because of the innumerable
variations. They avoid that by using Bash on Unix, command.com on
Windows (I think) and rc on Plan 9.

Byron's rc is a straw man, but it illustrates the conditions in the
wild.

++L
hiro
2012-08-28 14:23:06 UTC
env bash - posix 2.0
hiro
2012-08-28 14:34:39 UTC
https://groups.google.com/forum/#!topic/golang-nuts/aC7Qr1qtZ2I
Kurt H Maier
2012-08-28 14:13:32 UTC
Post by Lucio De Re
Post by Rudolf Sykora
Anyway, I do not understand how uniform crappiness can be advantageous...
The issue raised on Go-Nuts is that Bash shouldn't be used for
installing Go, /bin/sh should be used instead. The response is that
Bash is the most uniformly implemented of the /bin/sh's out there and
that none of the other shells (generally referred to as /bin/sh) can
be relied upon not to have incompatible foibles that would trip up a
complicated script headed #!/bin/sh.
Typical Go shit there. If the scripts are so complicated that it's a
pain in the ass to find a way to run them, fix the stupid scripts.
Lucio De Re
2012-08-28 14:52:34 UTC
Post by Kurt H Maier
Typical Go shit there. If the scripts are so complicated that it's a
pain in the ass to find a way to run them, fix the stupid scripts.
They did, by building the "go" command.

Do you think you can provide any guarantees that the subset of /bin/sh
features common to all current instances of /bin/sh is adequate to
build a moderately demanding open source package?

Or are you oriented towards kiloLOCs of test code to see which
features are implemented and provide compatibility a la autoconf?

++L
Kurt H Maier
2012-08-28 15:05:21 UTC
Post by Lucio De Re
Or are you oriented towards kiloLOCs of test code to see which
features are implemented and provide compatibility a la autoconf?
Excellent example of a false dilemma. I'm oriented towards exerting the
effort to make something that isn't shitty. I'm at peace with the go
developers' decision to avoid that effort. Are you?

Anyway, bash uses autoconf as well. So all you've done is push the mess
one step farther away from your code. Why not just cut the cord? I'm
hearing "shell scripting is easy" and I'm hearing "acceptance testing is
too hard." Which is it? I can write portable shell scripts, but the
idiots on golang-nuts have explicitly said they don't WANT portable
shell scripts. They want to rely on bash, and all the GNU bullshit that
brings with it.
Dan Cross
2012-08-28 15:18:39 UTC
Post by Kurt H Maier
Post by Lucio De Re
Or are you oriented towards kiloLOCs of test code to see which
features are implemented and provide compatibility a la autoconf?
Excellent example of a false dilemma. I'm oriented towards exerting the
effort to make something that isn't shitty.
Wonderful! Please point me to your new programming language so I can
have a look?
Post by Kurt H Maier
I'm at peace with the go
developers' decision to avoid that effort. Are you?
So are you saying that because they use bash to build the system, the
language is shitty? Or just the build system is shitty?
Post by Kurt H Maier
Anyway, bash uses autoconf as well. So all you've done is push the mess
one step farther away from your code. Why not just cut the cord? I'm
hearing "shell scripting is easy" and I'm hearing "acceptance testing is
too hard." Which is it? I can write portable shell scripts, but the
idiots on golang-nuts have explicitly said they don't WANT portable
shell scripts. They want to rely on bash, and all the GNU bullshit that
brings with it.
Writing a shell script is easy. Writing a shell script to build a
non-trivial piece of software across $n$ different platforms is hard.

I can't speak for the Go team, but I suspect their decision is a
pragmatic compromise: should they spend their (limited) time creating
and maintaining a portable version of 'rc' that can be built (how,
exactly? With a script that's just a straight run of commands or
something?) on a bunch of different platforms so that they can write
some beautiful script to build Go, or should they produce some lowest
common denominator shell script in the most common shell out there
that builds the system and then spend the time they save concentrating
on building a cool programming language?

I don't think the gain from the former approach is really worth the
cost to the latter.

To put it another way, why not cut the cord? Because it takes time
away from doing something they consider more important.

More generally, if your impression of Go as a language ("Typical go
shit...") is based on what shell they chose for the build script, then
I'm not sure you have your priorities straight.

- Dan C.
Kurt H Maier
2012-08-28 15:30:36 UTC
Post by Dan Cross
Wonderful! Please point me to your new programming language so I can
have a look?
I don't think it would do you any good, since you are apparently unable
to differentiate between programming languages and build systems.
Post by Dan Cross
So are you saying that because they use bash to build the system, the
language is shitty? Or just the build system is shitty?
I have other issues with Go as a language, but the build system is
unmitigated shit.
Post by Dan Cross
Writing a shell script is easy. Writing a shell script to build a
non-trivial piece of software across $n$ different platforms is hard.
And yet people do it all the time.
Post by Dan Cross
To put it another way, why not cut the cord? Because it takes time
away from doing something they consider more important.
Incorrect. There's a whole world of people out there; some of them
would be willing to build and maintain an elegant, portable shell
script. That's the point of having an open development process, I
thought. I understand the need for the core devs to focus on the task
at hand: language building. It is idiotic not to delegate the build
system to someone willing and able to devote the time to it.
Post by Dan Cross
More generally, if your impression of Go as a language ("Typical go
shit...") is based on what shell they chose for the build script, then
I'm not sure you have your priorities straight.
Fortunately, your assessment of my priorities is meaningless. "Typical
Go shit" referred to the ceaseless lack of focus on quality endemic to a
schizophrenic community that was organized around a language without a
mission. Go is still evolving in two separate directions; one camp sees
it as yet another language for web shit, and one camp sees it as a real
programming language for actual programs. I long ago lost interest in
seeing who will eventually win, but in the meantime every bad decision
seems to have some chorus of supporters who take every piece of
criticism personally. *Those* are the people who need to examine their
priorities.
Dan Cross
2012-08-28 15:45:35 UTC
Post by Kurt H Maier
Post by Dan Cross
Wonderful! Please point me to your new programming language so I can
have a look?
I don't think it would do you any good, since you are apparently unable
to differentiate between programming languages and build systems.
Oh no, I can't. Please, by all means, point me to whatever it is that
you have produced that demonstrates your prowess in this area so that
I can learn more.
Post by Kurt H Maier
Post by Dan Cross
So are you saying that because they use bash to build the system, the
language is shitty? Or just the build system is shitty?
I have other issues with Go as a language, but the build system is
unmitigated shit.
Irrelevant.
Post by Kurt H Maier
Post by Dan Cross
Writing a shell script is easy. Writing a shell script to build a
non-trivial piece of software across $n$ different platforms is hard.
And yet people do it all the time.
Irrelevant.
Post by Kurt H Maier
Post by Dan Cross
To put it another way, why not cut the cord? Because it takes time
away from doing something they consider more important.
Incorrect. There's a whole world of people out there; some of them
would be willing to build and maintain an elegant, portable shell
script. That's the point of having an open development process, I
thought. I understand the need for the core devs to focus on the task
at hand: language building. It is idiotic not to delegate the build
system to someone willing and able to devote the time to it.
Not the way that community is currently set up, so irrelevant.
Post by Kurt H Maier
Post by Dan Cross
More generally, if your impression of Go as a language ("Typical go
shit...") is based on what shell they chose for the build script, then
I'm not sure you have your priorities straight.
Fortunately, your assessment of my priorities is meaningless. "Typical
Go shit" referred to the ceaseless lack of focus on quality endemic to a
schizophrenic community that was organized around a language without a
mission. Go is still evolving in two separate directions; one camp sees
it as yet another language for web shit, and one camp sees it as a real
programming language for actual programs. I long ago lost interest in
seeing who will eventually win, but in the meantime every bad decision
seems to have some chorus of supporters who take every piece of
criticism personally. *Those* are the people who need to examine their
priorities.
And yet they produced the language, and people use it. But you clearly
know better, so please, by all means, show me what you've produced
that's useful that I can learn something from.

- Dan C.
Kurt H Maier
2012-08-28 17:43:34 UTC
Post by Dan Cross
Oh no, I can't. Please, by all means, point me to whatever it is that
you have produced that demonstrates your prowess in this area so that
I can learn more.
you sound upset
Post by Dan Cross
Irrelevant.
The topic at hand is not irrelevant to the topic at hand.
Post by Dan Cross
Irrelevant.
The topic at hand is not irrelevant to the topic at hand.
Post by Dan Cross
Not the way that community is currently set up, so irrelevant.
Hence "typical Go shit."
Post by Dan Cross
And yet they produced the language, and people use it. But you clearly
know better, so please, by all means, show me what you've produced
that's useful that I can learn something from.
I have no urges to prove myself to you. They have produced a language,
yes. They have not produced a worthwhile build system or development
community.
Dan Cross
2012-08-28 18:20:27 UTC
Post by Kurt H Maier
Post by Dan Cross
Oh no, I can't. Please, by all means, point me to whatever it is that
you have produced that demonstrates your prowess in this area so that
I can learn more.
you sound upset
Not at all.
Post by Kurt H Maier
[...]
Post by Dan Cross
And yet they produced the language, and people use it. But you clearly
know better, so please, by all means, show me what you've produced
that's useful that I can learn something from.
I have no urges to prove myself to you.
I see no reason to take your opinion any more seriously than
anyone else's. You're entitled to it; that doesn't mean you are
right.
Post by Kurt H Maier
They have produced a language, yes.
They have not produced a worthwhile build system
You are conflating bootstrapping the language with the language's
build system. The go command is actually quite nice.

The use of bash in Go is tiny. Why fixate on it when you could go
build something useful, instead?
Post by Kurt H Maier
or development community.
Evidence suggests otherwise.

Anyway, I have neither the time nor the inclination to get into a
pissing match with some random person on the Internet about Go's use
of bash. If it's such a serious problem for you, well, I hope you
figure out a way to work around it. If not, then I don't know what to
tell you. In either case, good luck!

- Dan C.
Kurt H Maier
2012-08-28 18:26:06 UTC
Post by Dan Cross
You are conflating bootstrapping the language with the language's
build system. The go command is actually quite nice.
Also, the go command is useless unless the bootstrap build system can
construct it. I'm not conflating anything, I'm just not talking about
the build system you like.
Post by Dan Cross
The use of bash in Go is tiny. Why fixate on it when you could go
build something useful, instead?
Because a corrected build system would be useful to me. Is this a
complicated concept?
Post by Dan Cross
Evidence suggests otherwise.
I have yet to see such.
Post by Dan Cross
Anyway, I have neither the time nor the inclination to get into a
pissing match with some random person on the Internet about Go's use
of bash. If it's such a serious problem for you, well, I hope you
figure out a way to work around it. If not, then I don't know what to
tell you. In either case, good luck!
I wish you would have ascertained you had nothing to tell me earlier in
the thread. Thank you for your support.
Dan Cross
2012-08-28 18:39:13 UTC
Post by Kurt H Maier
Post by Dan Cross
You are conflating bootstrapping the language with the language's
build system. The go command is actually quite nice.
Also, the go command is useless unless the bootstrap build system can
construct it. I'm not conflating anything, I'm just not talking about
the build system you like.
I don't *like* it, I just don't *hate* it. Two very different concepts.
Post by Kurt H Maier
Post by Dan Cross
The use of bash in Go is tiny. Why fixate on it when you could go
build something useful, instead?
Because a corrected build system would be useful to me.
Well, if you could explain a) how it's currently broken, and b) how a
'corrected' version would be useful, others might be more sympathetic
to your concerns. From most perspectives, it doesn't appear broken at
all; it works fine, it's just not what you would have done.
Post by Kurt H Maier
Is this a complicated concept?
No. But it's basic tact and consideration to fully explain oneself if
one expects a useful response.
Post by Kurt H Maier
Post by Dan Cross
Evidence suggests otherwise.
I have yet to see such.
*shrug* Don't know what to tell you, then.
Post by Kurt H Maier
Post by Dan Cross
Anyway, I have neither the time nor the inclination to get into a
pissing match with some random person on the Internet about Go's use
of bash. If it's such a serious problem for you, well, I hope you
figure out a way to work around it. If not, then I don't know what to
tell you. In either case, good luck!
I wish you would have ascertained you had nothing to tell me earlier in
the thread. Thank you for your support.
I somehow get the feeling that few people have anything to tell you
that you're willing to listen to.

- Dan C.
Jeremy Jackins
2012-08-28 20:42:57 UTC
Post by Dan Cross
Well, if you could explain a) how it's currently broken, and b) how a
'corrected' version would be useful, others might be more sympathetic
to your concerns. From most perspectives, it doesn't appear broken at
all; it works fine, it's just not what you would have done.
Speak for yourself, please.
Dan Cross
2012-08-28 21:20:06 UTC
Post by Jeremy Jackins
Post by Dan Cross
Well, if you could explain a) how it's currently broken, and b) how a
'corrected' version would be useful, others might be more sympathetic
to your concerns. From most perspectives, it doesn't appear broken at
all; it works fine, it's just not what you would have done.
Speak for yourself, please.
Which part? One was a request to provide a substantive argument, the other
an objective fact.

- Dan C.
Kurt H Maier
2012-08-28 18:47:57 UTC
Post by Dan Cross
Well, if you could explain a) how it's currently broken, and b) how a
'corrected' version would be useful, others might be more sympathetic
to your concerns. From most perspectives, it doesn't appear broken at
all; it works fine, it's just not what you would have done.
Literally dozens of people have already explained why this would be
useful. It's currently broken because it makes unnecessary assumptions
about target platforms in exchange for less up-front work on the part of
the devs. These assumptions are often incorrect.
Post by Dan Cross
*shrug* Don't know what to tell you, then.
Thanks for your input.
Post by Dan Cross
I somehow get the feeling that few people have anything to tell you
that you're willing to listen to.
I'm enriched by this information.
Uriel
2012-08-28 20:40:13 UTC
Post by Kurt H Maier
Post by Dan Cross
So are you saying that because they use bash to build the system, the
language is shitty? Or just the build system is shitty?
I have other issues with Go as a language, but the build system is
unmitigated shit.
Post by Dan Cross
Writing a shell script is easy. Writing a shell script to build a
non-trivial piece of software across $n$ different platforms is hard.
And yet people do it all the time.
Go currently builds out of the box on Linux, FreeBSD, OS X, Windows,
and Plan 9 (and probably more places); I don't know many build systems
that can do that.

The build system has basically been replaced by Russ since you last looked at it.

The remaining few bash lines are there basically for historical
reasons, because Russ (or anyone else) could not be bothered to
replace them with Go or C code, which is what is used in the rest of
the build system.

Which, again, makes this whole thread even more ridiculous, as in the
time you people have spent whining about it you could have removed the
last remaining lines of bash from the Go build system. But guess what?
Nobody really cares.

This whole argument has turned into a typical bike shed: everyone has
an opinion because it is such a trivial matter that anyone can easily
argue about it for hours.
Post by Kurt H Maier
Post by Dan Cross
To put it another way, why not cut the cord? Because it takes time
away from doing something they consider more important.
Incorrect. There's a whole world of people out there; some of them
would be willing to build and maintain an elegant, portable shell
script. That's the point of having an open development process, I
thought. I understand the need for the core devs to focus on the task
at hand: language building. It is idiotic not to delegate the build
system to someone willing and able to devote the time to it.
Nobody has volunteered to do it, so to blame Russ for actually solving
a problem nobody else has bothered to work on is very unfair.
Post by Kurt H Maier
Post by Dan Cross
More generally, if your impression of Go as a language ("Typical go
shit...") is based on what shell they chose for the build script, then
I'm not sure you have your priorities straight.
Fortunately, your assessment of my priorities is meaningless. "Typical
Go shit" referred to the ceaseless lack of focus on quality endemic to a
schizophrenic community that was organized around a language without a
mission. Go is still evolving in two separate directions; one camp sees
it as yet another language for web shit, and one camp sees it as a real
programming language for actual programs. I long ago lost interest in
seeing who will eventually win, but in the meantime every bad decision
seems to have some chorus of supporters who take every piece of
criticism personally. *Those* are the people who need to examine their
priorities.
Go is not "still evolving"; Go 1 was released a while ago, and there
are absolutely no plans to change the language for the foreseeable
future.

Whether it is used for "web shit" or "real programs" doesn't change
the language in any way.

And if you think ken can be persuaded to change the language based on
what people building "web shit" want, then you really don't know ken
very well.
Lucio De Re
2012-08-28 15:36:58 UTC
Post by Kurt H Maier
Post by Lucio De Re
Or are you oriented towards kiloLOCs of test code to see which
features are implemented and provide compatibility a la autoconf?
Excellent example of a false dilemma. I'm oriented towards exerting the
effort to make something that isn't shitty. I'm at peace with the go
developers' decision to avoid that effort. Are you?
Sure, feel free to make something that isn't shitty, there's plenty
out there that can be improved. The machinery to install Go (from
sources) is hardly the most important amongst them.

And, yes, I am: I use Plan 9 as my development platform, /bin/ksh
(pdksh, I presume) as my Unix shell on NetBSD. On Ubuntu I could not
give it a passing thought, my environment gets recycled way too often
to bother.

What the Go Team choose to use as the underlying shell isn't
important, unless it impacts their delivery of the Go code. Arguing
that there are in fact portable /bin/sh scripts that can be written
does not actually write such scripts. No one stops anyone from taking
the purportedly "bash" scripts, converting them to tidy, clean /bin/sh
scripts and submitting them to the Go Team for inclusion. The Go
Team's reaction will still be: "How do you know that these scripts
will work under all possible instances of /bin/sh?" Solution: replace
the #!/bin/sh with #!/usr/bin/env -c /bin/bash. Why not?
Post by Kurt H Maier
Anyway, bash uses autoconf as well. So all you've done is push the mess
one step farther away from your code. Why not just cut the cord? I'm
hearing "shell scripting is easy" and I'm hearing "acceptance testing is
too hard." Which is it? I can write portable shell scripts, but the
idiots on golang-nuts have explicitly said they don't WANT portable
shell scripts. They want to rely on bash, and all the GNU bullshit that
brings with it.
Bash using Autoconf is a straw man, there are almost certainly binary
releases of Bash out there for all Unix-y platforms on which the Go
development system is likely to work or be portable to. I may
misremember, but before the Go tool was released, the Plan 9 release
managed to get itself compiled using ape/sh. As far as I can tell,
the dependence isn't in Bash features as much as in the consistency
across Bash versions.

++L
Kurt H Maier
2012-08-28 15:35:48 UTC
Post by Lucio De Re
Sure, feel free to make something that isn't shitty, there's plenty
out there that can be improved. The machinery to install Go (from
sources) is hardly the most important amongst them.
The Go team has already explicitly stated they are not interested in a
better build system. I don't know if it's plain NIH or a secret bash
fetish, but they're not buying. Improving software is not a zero-sum
game; Go development is not a closed system. The build system can be
improved without impeding other progress.
Post by Lucio De Re
Solution: replace
the #!/bin/sh with #!/usr/bin/env -c /bin/bash. Why not?
Because there are plenty of systems out there without env or bash.
Post by Lucio De Re
I may
misremember, but before the Go tool was released, the Plan 9 release
managed to get itself compiled using ape/sh. As far as I can tell,
the dependence isn't in Bash features as much as in the consistency
across Bash versions.
...which is another unproved assumption.
erik quanstrom
2012-08-28 15:41:05 UTC
Post by Kurt H Maier
Post by Lucio De Re
Solution: replace
the #!/bin/sh with #!/usr/bin/env -c /bin/bash. Why not?
Because there are plenty of systems out there without env or bash.
so what's the reason for this argument on 9fans? is it that it makes
building go on plan 9 harder?

- erik
Kurt H Maier
2012-08-28 15:47:04 UTC
Post by erik quanstrom
so what's the reason for this argument on 9fans? is it that it makes
building go on plan 9 harder?
I think it started out with rc users defending their purity of essence.
I'm just an Unattached Lensman with the Galactic Bullshit Patrol.
hiro
2012-08-28 16:04:24 UTC
great, it's becoming a pissing contest.
Lucio De Re
2012-08-28 16:05:04 UTC
Post by Kurt H Maier
Post by Lucio De Re
Solution: replace
the #!/bin/sh with #!/usr/bin/env -c /bin/bash. Why not?
Because there are plenty of systems out there without env or bash.
Worth a try, though! There is very little shell code left in the Go
release. Maybe I'll give it a try on my pristine NetBSD machine.

But note that even if it does work, it is still not possible for the
Go Team to release the scripts as /bin/sh scripts because, as you have
clearly not yet grasped, not all /bin/sh instances out there can be
shown to be compatible with any one /bin/sh script, no matter how
"portable"!

Maybe the following illustration will help: Given Unix, Plan 9 and
Windows as target platforms, how does one go about releasing a single
build environment for all of them that on invocation automatically
produces the target package? I didn't give it a lot of thought, but
it seems to me that the totally general build (bootstrap?) operation
for this purpose does not (and, it seems to me, cannot) exist.

++L
Kurt H Maier
2012-08-28 17:46:21 UTC
Post by Lucio De Re
But note that even if it does work, it is still not possible for the
Go Team to release the scripts as /bin/sh scripts because, as you have
clearly not yet grasped, not all /bin/sh instances out there can be
shown to be compatible with any one /bin/sh script, no matter how
"portable"!
This problem has been solved in myriad ways by myriad teams. Requiring
bash is one of the solutions. My only actual statement is that a better
solution would be de-shitting the build process so that it doesn't
require such a precise set of software to operate. That's all. Relax.
Lucio De Re
2012-08-28 18:19:52 UTC
Post by Kurt H Maier
My only actual statement is that a better
solution would be de-shitting the build process so that it doesn't
require such a precise set of software to operate.
Does that translate into being able to supply an example of such a
"de-shitting" process the Go Team could and should have followed? An
irresistible paragon of building prowess? Something even the autoconf
people would be tempted to follow?

++L
Kurt H Maier
2012-08-28 18:18:14 UTC
Post by Lucio De Re
Does that translate into being able to supply an example of such a
"de-shitting" process the Go Team could and should have followed? An
irresistible paragon of building prowess? Something even the autoconf
people would be tempted to follow?
You're still approaching this like it's some kind of pissing match.
Understand that "we did this because it was easy and it works so
whatever" is perfectly valid. Understand as well that it's also a
non-optimal solution, and it's not been without its own problems.
The root of the problem isn't that some magical man has to swoop in with
all the answers all at once, it's that the core devs don't give a shit,
which (again) is their prerogative.
Uriel
2012-08-28 20:44:04 UTC
Post by Kurt H Maier
Post by Lucio De Re
Sure, feel free to make something that isn't shitty, there's plenty
out there that can be improved. The machinery to install Go (from
sources) is hardly the most important amongst them.
The Go team has already explicitly stated they are not interested in a
better build system. I don't know if it's plain NIH or a secret bash
fetish, but they're not buying. Improving software is not a zero-sum
game; Go development is not a closed system. The build system can be
improved without impeding other progress.
Where did the Go team say explicitly they are not interested in a
better build system?

They just said they are not interested in replacing the last very few
remaining bits of bash unless there is certainty that it won't break
the build on some system.

The current build system already provides facilities to do this; it's just
that nobody has bothered to replace the last remaining bits of (ba)sh.
Post by Kurt H Maier
Post by Lucio De Re
Solution: replace
the #!/bin/sh with #!/usr/bin/env -c /bin/bash. Why not?
Because there are plenty of systems out there without env or bash.
Post by Lucio De Re
I may
misremember, but before the Go tool was released, the Plan 9 release
managed to get itself compiled using ape/sh. As far as I can tell,
the dependence isn't in Bash features as much as in the consistency
across Bash versions.
...which is another unproved assumption.
What is the unproven assumption? One of the Go devs already pointed
out that the current shell scripts don't work (for who knows what
reason) with FreeBSD's default shell, even when they make no use of any
bash-specific features as far as anyone knows.
t***@polynum.com
2012-08-28 16:06:41 UTC
Permalink
Post by Lucio De Re
Do you think you can provide any guarantees that the subset of /bin/sh
features common to all current instances of /bin/sh is adequate to
build a moderately demanding open source package?
Yes. This is what is done by the R.I.S.K. framework used for building
KerGIS and kerTeX. The minimum required is a subset of POSIX.2 for the tools,
since it is trivial to provide such a subset for whatever environment (APE
on Plan 9; this is what Mingw does for Windows; Cygwin is the overloaded
version; R.I.S.K. will probably provide one in the future).
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Aram Hăvărneanu
2012-08-28 16:12:15 UTC
Post by t***@polynum.com
Yes. This is what is done by the R.I.S.K. framework used for building
KerGIS and kerTeX.
I'm pretty sure that R.I.S.K has more than 2,250 lines of code. That's
the LOC count of \.(ba)?sh$ stuff in the Go tree. Also, nobody seemed
to mention that Go also ships with rc files to build on Plan 9...
--
Aram Hăvărneanu
t***@polynum.com
2012-08-28 20:43:56 UTC
[Since the previous one did not reach the list (?), I send it once more]
Post by Aram Hăvărneanu
Post by t***@polynum.com
Yes. This is what is done by the R.I.S.K. framework used for building
KerGIS and kerTeX.
I'm pretty sure that R.I.S.K has more than 2,250 lines of code. That's
the LOC count of \.(ba)?sh$ stuff in the Go tree. Also, nobody seemed
to mention that Go also ships with rc files to build on Plan 9...
result of "wc -l rk*":

838 rkbuild
1121 rkconfig
60 rkguess
247 rkinstall
256 rkpkg
2522 total

This is with comments of course. The rest are the trivial parameters
files for each system. (rkguess is used to sketch such a parameters file
on a new system.)

And this does what no other framework does: it can remove intermediate
products, so that you can compile a package whose result is n megabytes
using only slightly more than n megabytes of space...

To be clear: I'm answering the claim that "there is nothing existing proving
this can be done differently". I'm not arguing about the choices
of the Go developers: they do the work; they do as they see fit.
Period.

And as for KerGIS vs. GRASS, or kerTeX vs. TeX Live: it cost me less time
to do R.I.S.K. from scratch, thus knowing everything about how it works,
how to fix it, how to improve it, and why it fails, than to try to use the
existing monsters.

Sending to /dev/null is the developer's primary tool. (As well as for
the readers of my messages, probably...)
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
t***@polynum.com
2012-08-28 16:32:57 UTC
Post by Aram Hăvărneanu
Post by t***@polynum.com
Yes. This is what is done by the R.I.S.K. framework used for building
KerGIS and kerTeX.
I'm pretty sure that R.I.S.K has more than 2,250 lines of code. That's
the LOC count of \.(ba)?sh$ stuff in the Go tree. Also, nobody seemed
to mention that Go also ships with rc files to build on Plan 9...
$ wc -l rk*

838 rkbuild
1121 rkconfig
60 rkguess
247 rkinstall
256 rkpkg
2522 total

This is with comments of course. The rest are the trivial parameters
files for each system. (rkguess is used to sketch such a parameters file
on a new system.)

And this does what no other framework does: it can remove intermediate
products, so that you can compile a package whose result is n megabytes
using only slightly more than n megabytes of space...
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Lucio De Re
2012-08-28 18:13:22 UTC
Post by t***@polynum.com
The minimum required is a subset of POSIX.2 for the tools,
Maybe I'm pushing too hard here, but even Posix isn't followed by all
implementations of /bin/sh (no, I'm not sure, but there is no proof
possible, as the future is also a factor). Thing is, Bash is
well-defined, by a single implementation.

That's the original issue: why Bash over RC? Go is incidental, but a
data point in a big statistical set. All Bashes are equal, all
instances of /bin/sh aren't: it's not better, it's singular.

++L
a***@skeeve.com
2012-08-30 06:14:23 UTC
Post by Lucio De Re
All Bashes are equal,
Even this isn't true. Bash is at 4.2 and people still report "issues"
with 3.x. (Same with gawk; gawk is at 4.0.1, people still send it bug reports
about 3.1.3, which is 10 years old!)

I am not familiar with the use of Bash in Go; I suspect that they stick
to stuff that will work across Bash versions, though.

Arnold
Lucio De Re
2012-08-30 07:41:14 UTC
Post by a***@skeeve.com
I am not familiar with the use of Bash in Go; I suspect that they stick
to stuff that will work across Bash versions, though.
The difficult bit for argumentative people to grasp is that the Go
Team uses features that are portable across Bourne-like shells; they
just refuse to commit to that level of compatibility. On the one
hand, requiring Bash gives them a reliable grounding; on the other,
if they slip up and use an advanced feature the entire castle doesn't
come tumbling down on their heads.

Putting it another way, they use Bash specifically to minimise this
type of argumentative bike-shedding.

++L
t***@polynum.com
2012-08-30 09:34:51 UTC
Post by Lucio De Re
The difficult bit for argumentative people to grasp is that the Go
Team uses features that are portable across Bourne-like shells; they
just refuse to commit to that level of compatibility. On the one
hand, requiring Bash gives them a reliable grounding; on the other,
if they slip up and use an advanced feature the entire castle doesn't
come tumbling down on their heads.
This is indeed what I do. The requirement of a _subset_ of POSIX.2 is so
that this will work with whatever Bourne-like shell is there. When
something doesn't work on an OS, I work around it by using a more limited
subset everywhere. Bash is used as /bin/sh on a lot of Linuces, and R.I.S.K.
then uses Bash, but with limited POSIX.2 features. (I had to work around
some use of regexps because Plan 9 APE relies on Plan 9 regexps, which do not
implement "\{m,n\}", for example. I had to work around a peculiarity of
Bash that exits with an error when a function has an empty body---adding
just a comment will do...--- Etc.)
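
For illustration, the empty-body case looks roughly like this (a generic
sketch, not the actual R.I.S.K. code); the ':' no-op builtin is the usual
portable way out:

f() { }      # bash: syntax error near unexpected token `}'
f() { :; }   # ':' gives the body a command; accepted by Bash and other Bourne-like shells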

But as I said, this is not to argue about the Go developers' choices:
they do as they see fit, since I do it this way for myself ;) But an
alternative is not only possible: it exists. (Nor is this a plea for
wide use of R.I.S.K., since it has been published as a side effect: I
didn't want to "support" it for external uses. But if I make a project
public, I publish the framework too...)
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Lucio De Re
2012-08-30 10:26:26 UTC
Post by t***@polynum.com
they do as they see fit
I think their philosophy is sound, not just an arbitrary choice. The
alternative is a commitment that can only be fulfilled by applying
resources best utilised on the focal issue.

For example, the kerTeX installation relies on an ftp client that
accepts a URL on the command line. My UBUNTU installation has no such
ftp command. That leaves you with the choice between driving the more
conventional ftp program with a small script (not nice, but it can be
done) or requiring (as you do for LEX and YACC) that wget be installed
everywhere, not just where ftp isn't of the neat BSD variety.

It's a choice you make on behalf of the user and you can be sure that
a significant portion of your target market would prefer the opposite.
A very small portion will also stand up and criticise you if you go
the wget route, whereas it is much harder to challenge the use of ftp
with a script. However, of the two, wget is more robust.

That's the way it is. Sometimes one has the luxury of doing things
properly, sometimes it is more critical to arrive at a result first.
A healthy ethos would encourage tidying up behind one, but the costs
are seldom justified in the present development climate. Future
conditions may be different and perhaps we can then all feel justified
in chipping in to tidy up behind our less tidy pioneers.

++L
t***@polynum.com
2012-08-30 10:29:41 UTC
Post by Lucio De Re
For example, the kerTeX installation relies on an ftp client that
accepts a URL on the command line.
Slight correction: this is not the kerTeX installation; this is sugar
for simplifying installation in the most common cases. The base does
not require ftp. The "primitives" (R.I.S.K. proper) do not use it.

But I think we mostly agree on the issues.
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Lucio De Re
2012-08-30 10:50:05 UTC
Post by t***@polynum.com
The base does
not require ftp.
I apologise for working with too little information. I have long
wanted to have TeX installed for the rare occasions when I want to
explore the TeX Book, so I took a chance. I'm waiting to find the
energy to solve the libc/libm/libl problem I encountered :-)

That, or the stable Plan 9 installation to install kerTeX on.

++L
Dan Cross
2012-08-30 10:35:42 UTC
[Special to Lucio: Email to proxima.alt.za from Google's SMTP servers
is failing; it looks like they're listed in rbl.proxima.alt.za.]
Post by Lucio De Re
Post by t***@polynum.com
they do as they see fit
I think their philosophy is sound, not just an arbitrary choice. The
alternative is a commitment that can only be fulfilled by applying
resources best utilised on the focal issue.
For example, the kerTeX installation relies on an ftp client that
accepts a URL on the command line. My UBUNTU installation has no such
ftp command. That leaves you with the choice between driving the more
conventional ftp program with a small script (not nice, but it can be
done) or require (as you do for LEX and YACC) that wget be installed
everywhere, not just where ftp isn't of the neat BSD variety.
It's a choice you make on behalf of the user and you can be sure that
a significant portion of your target market would prefer the opposite.
A very small portion will also stand up and criticise you if you go
the wget rule, whereas it is much harder to challenge the use of ftp
with a script. However, of the two, wget is more robust.
That's the way it is. Sometimes one has the luxury of doing things
properly, sometimes it is more critical to arrive at a result first.
A healthy ethos would encourage tidying up behind one, but the costs
are seldom justified in the present development climate. Future
conditions may be different and perhaps we can then all feel justified
in chipping in to tidy up behind our less tidy pioneers.
Very well put.

- Dan C.
dexen deVries
2012-08-28 14:16:33 UTC
Post by Lucio De Re
Post by Rudolf Sykora
Anyway, I do not understand how uniform crappiness can be advantageous...
The issue raised on Go-Nuts is that Bash shouldn't be used for
installing Go, /bin/sh should be used instead. The response is that
Bash is the most uniformly implemented of the /bin/sh's out there and
that none of the other shells (generally referred to as /bin/sh) can
be relied upon not to have incompatible foibles that would trip up a
complicated script headed #!/bin/sh. (...)
that somehow reminds me of autoconf
*runs away*

--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
Balwinder S Dheeman
2012-08-29 09:16:24 UTC
Post by dexen deVries
Post by Rudolf Sykora
Hello,
I am just curious...
Here
http://9fans.net/archive/2007/11/120
Russ Cox writes he uses bash as his default shell. Does anybody know
the reason? Is this for practicality within the linux environment? Or
has he found rc too limiting?
FWIW, i'm using bash as the interactive shell too, in `konsole' terminal
emulator, because of bash's interactive line editing and command history. 9term
doesn't fit me.
I added command-line editing and history to ash under Minix a decade
ago and it worked like a charm; the same ash is known as dash in the Debian
world, but I did not bother to submit a patch.

Whereas the bash man page itself explains a lot:

BUGS
It's too big and too slow.

There are some subtle differences between bash and traditional
versions of sh, mostly because of the POSIX specification.

Aliases are confusing in some uses.

Shell built-in commands and functions are not stoppable/restartable.

Compound commands and command sequences of the form `a ; b ; c' are
not handled gracefully when process suspension is attempted. When
a process is stopped, the shell immediately executes the next
command in the sequence. It suffices to place the sequence of
commands between parentheses to force it into a sub-shell, which
may be stopped as a unit.

Array variables may not (yet) be exported.

There may be only one active co-process at a time.

GNU Bash-4.2 2010 December 28 BASH(1)
Post by dexen deVries
all scripting -- both standalone and in mkfiles -- goes in rc, though.
It's strange, though, that ArchLinux uses bash as a system shell :P
--
Balwinder S "bdheeman" Dheeman
(http://werc.homelinux.net/contact/)
Charles Forsyth
2012-08-29 15:10:39 UTC
Changing my default shell from bash to rc caused even terminal windows (let
alone 9term ones) to appear almost instantly on Ubuntu
(9term windows appear instantly on the click).

We slowly standardise on slow standards, with degradation everywhere.
hiro
2012-08-29 15:22:57 UTC
For compatibility I use ash in xterm and rc in 9term ;)
Charles Forsyth
2012-08-29 15:34:31 UTC
I have an rc script, allowing u ./configure, u make, u man, ...

% cat bin/u
#!/bin/rc
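# run the given command with a conventional Unix environment:
# a Bourne shell in $SHELL, a standard path, and plain cat as the man pager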
SHELL=/bin/sh
path=(/usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin /usr/bin/X11 /usr/games)
MANPAGER=/bin/cat
exec $*
Post by hiro
For compatibility I use ash in xterm and rc in 9term ;)
Dan Cross
2012-08-28 15:09:33 UTC
Post by Rudolf Sykora
Hello,
Howdy.
Post by Rudolf Sykora
I am just curious...
Here
http://9fans.net/archive/2007/11/120
Russ Cox writes he uses bash as his default shell. Does anybody know
the reason? Is this for practicality within the linux environment? Or
has he found rc too limiting?
So rc is a nice shell, but it's most useful in a particular
environment that has evolved with it in a very pleasant way. If one
is constrained to work outside of that environment, then rc isn't so
much better than any other shell.

Note that I'm not referring to the implementation (rc is certainly
nicer than bash in that sense), but rather to the tangible function from a
user's perspective.
one's coworkers are stuck using bash and one needs to retain
shell-level compatibility with them for some reason or another, then
it makes sense to use bash, as aesthetically unpleasing as that may
be.

One has to ask oneself, is rc worth it? If the level of productivity
increase that came from using rc instead of bash were greater than the
cost of maintaining a custom environment built around rc, then one
might make an argument for using it. But how many of us can
honestly say that the benefits are so great? The basic command,
pipe and stdout redirection syntax is the same. It's the same if I
want to run a process or pipeline in the background. I can set the
prompts to be the same and configure things so that copy/paste works
in an identical fashion across the two. And those are the VAST
majority of things I do with a shell; to be honest, 99% of the time, I
don't even think about what shell I'm running, regardless of what it
is.

And rc is not perfect. I've always felt like the 'if not' stuff was a kludge.

- Dan C.
erik quanstrom
2012-08-28 15:26:19 UTC
Post by Dan Cross
And rc is not perfect. I've always felt like the 'if not' stuff was a kludge.
no, it's certainly not. (i wouldn't call if not a kludge—just ugly. the
haahr/rakitzis es' if makes more sense, even if it's weirder.)
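
for anyone who hasn't seen it, a minimal sketch of the construct in question:

if(test -f /tmp/lock)
	echo already running
if not
	touch /tmp/lock

the if not branch runs exactly when the immediately preceding if did not.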

but the real question with rc is, what would you fix?

i can only think of a few things around the edges. `{} and $ are
obvious, as is some way to use standard regular expressions. but
those really aren't that motivating. rc does enough.

perhaps (let's hope) someone else has better ideas.

- erik
Lucio De Re
2012-08-28 15:40:49 UTC
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
The Inferno shell was (is) slick!

++L
erik quanstrom
2012-08-28 15:34:34 UTC
Post by Lucio De Re
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
The Inferno shell was (is) slick!
and iirc, the slickness depends on limbo.

- erik
dexen deVries
2012-08-28 18:24:48 UTC
Post by erik quanstrom
Post by Dan Cross
And rc is not perfect. I've always felt like the 'if not' stuff was a kludge.
no, it's certainly not. (i wouldn't call if not a kludge—just ugly. the
haahr/rakitzis es' if makes more sense, even if it's weirder.)
but the real question with rc is, what would you fix?
switch/case would make helluva difference over nested if/if not, if defaulted
to fall-through.

variable scoping (better than a subshell) would help writing larger scripts, but
that's not necessarily an improvement ;-) something similar to LISP's `let'
special form, for dynamic binding.
--
dexen deVries

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and
backward simultaneously. Not satisfied with the number of deaths and permanent
maimings from that invention he invents C and Unix.
erik quanstrom
2012-08-28 18:44:40 UTC
Post by dexen deVries
switch/case would make helluva difference over nested if/if not, if
defaulted to fall-through.
maybe you have an example? because i don't see that. if not works
fine, and can be nested. case without fallthrough is also generally
what i want. if not, i can make the common stuff a function.
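
for reference, rc's existing switch already gives case without fallthrough;
a minimal sketch:

switch($#*){
case 0
	echo 'usage: cmd file ...'
case 1
	echo one file
case *
	echo many files
}
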
Post by dexen deVries
variable scoping (better than a subshell) would help writing larger
scripts, but that's not necessarily an improvement ;-) something
similar to LISP's `let' special form, for dynamic binding.
there is variable scoping. you can write

x=() y=() cmd

cmd can be a function body or whatever. x and y are then private
to cmd. you can nest redefinitions.

x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y}
first 1 2
second a b
third α β
ret second a b
ret first 1 2

you should try the es shell. es had let and some other scheme-y
features. let allows one to do all kinds of tricky stuff, like build
a shell debugger in the shell. my opinion is that es was more
powerful and fun, but it didn't buy enough because it didn't really
expand on the essential nature of a shell: what can one do to
manipulate processes and file descriptors?

- erik
Dan Cross
2012-08-28 19:41:26 UTC
Post by erik quanstrom
Post by Dan Cross
And rc is not perfect. I've always felt like the 'if not' stuff was a kludge.
no, it's certainly not. (i wouldn't call if not a kludge—just ugly.
Kludge perhaps in the sense that it seems to be there to work around an
issue with the grammar and the expectation that it's mostly going to
be used interactively, as opposed to programmatically. See below.
Post by erik quanstrom
the haahr/rakitzis es' if makes more sense, even if it's weirder.)
Agreed; es would be an interesting starting point for a new shell.
Post by erik quanstrom
but the real question with rc is, what would you fix?
I think in order to really answer that question, one would have to
step back for a moment and really think about what one wants out of a
shell. There seems to be a natural conflict between a programming language
and a command interpreter (e.g., the 'if' vs. 'if not' thing). On
which side does one err?
Post by erik quanstrom
i can only think of a few things around the edges. `{} and $ are
obvious and is some way to use standard regular expressions. but
those really aren't that motivating. rc does enough.
I tend to agree. As a command interpreter, rc is more or less fine as
is. I'd really only feel motivated to change whatever people felt
were common nits, and there are fairly few of those.
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
Well, something off the top of my head: Unix pipelines are sort of
like chains of coroutines. And they work great for defining linear
combinations of filters. But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams). That
would be kind of an interesting thing to play with in a shell
language; I don't know how practically useful it would be, though.
Post by erik quanstrom
Post by Dan Cross
switch/case would make helluva difference over nested if/if not, if
defaulted to fall-through.
maybe you have an example? because i don't see that. if not works
fine, and can be nested. case without fallthrough is also generally
what i want. if not, i can make the common stuff a function.
Post by Dan Cross
variable scoping (better than a subshell) would help writing larger
scripts, but that's not necessarily an improvement ;-) something
similar to LISP's `let' special form, for dynamic binding.
(A nit: 'let' actually introduces lexical scoping in most Lisp
variants; yes, doing (let ((a 1)) ...) has non-lexical effect if 'a'
is a dynamic variable in Common Lisp, but (let) doesn't itself
introduce dynamic variables. Emacs Lisp is a notable exception in
this regard.)
Post by erik quanstrom
there is variable scoping. you can write
x=() y=() cmd
cmd can be a function body or whatever. x and y are then private
to cmd. you can nest redefinitions.
x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y}
first 1 2
second a b
third α β
ret second a b
ret first 1 2
This syntax feels clunky and unfamiliar to me; rc resembles block
scoped languages like C; I'd rather have a 'local' or similar keyword
to introduce a variable in the scope of each '{ }' block.
Post by erik quanstrom
you should try the es shell. es had let and some other scheme-y
features. let allows one to do all kinds of tricky stuff, like build
a shell debugger in the shell, but my opinion is that es was more
powerful and fun, but it didn't buy enough because it didn't really
expand on the essential nature of a shell. what can one do to
manipulate processes and file descriptors.
es was a weird merger between rc's syntax and functional programming
concepts. It's neat-ish, but unless we're really ready to go to the
pipe monad (not that weird, in my opinion) you're right. Still, if it
allowed one to lexically bind a file descriptor to a variable, I could
see that being neat; could I have a closure over a file descriptor? I
don't think the underlying process model is really set up for it, but
it would be kind of cool: one could have different commands consuming
part of a stream in a very flexible way.

- Dan C.
Aram Hăvărneanu
2012-08-28 20:31:33 UTC
Post by Dan Cross
But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams).
Rc has this. It's great. See section 10 of the rc paper or <{command}
in the rc manual. I use it all the time to see differences between
programmatically generated things.
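For instance, something along these lines (an illustrative sketch; old-cmd
and new-cmd stand in for whatever generates the two versions):

diff <{old-cmd} <{new-cmd}

Each <{...} runs its command with standard output on a pipe and expands
to a file name referring to that pipe, so diff sees two ``files'' without
any temporary files.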
--
Aram Hăvărneanu
Bakul Shah
2012-08-28 20:14:30 UTC
Permalink
Post by Dan Cross
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
Well, something off the top of my head: Unix pipelines are sort of
like chains of coroutines. And they work great for defining linear
combinations of filters. But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams). That
would be kind of an interesting thing to play with in a shell
language; I don't know how practically useful it would be, though.
Coming up with an easy-to-use syntax for computation trees (or
arbitrary nets) is the hard part. Maybe the time is ripe for
a net-rc or net-scheme-shell.

The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
erik quanstrom
2012-08-29 01:39:06 UTC
Permalink
Post by Bakul Shah
The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
!? the ability to pass typed records around is an idea that was
tarred, feathered, drawn and quartered by unix. files, and therefore
streams, have no type. they are byte streams.

one of the advantages of unix over, say, ibm systems, is that in unix
it is not the os' business to care what you're passing about. but by
the same token, if you are the application, you get to arrange these
things by yourself.

rc already passes structured data through the environment.
rc variables in the environment are defined as

var: [^ctl-a]*
| ([^ctl-a]*) ctl-a list

so there is precedent for this in shells.
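
for instance (a sketch; the variable and its values are made up):

x=(one two three)
xd -c /env/x

the dump should show the three components split by the separator byte the
grammar above uses, which is how a child rc reconstructs the list.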

- erik
erik quanstrom
2012-08-29 01:43:04 UTC
Permalink
Post by erik quanstrom
var: [^ctl-a]*
| ([^ctl-a]*) ctl-a list
sorry. s/list/var/

- erik
Bakul Shah
2012-08-29 02:13:30 UTC
Permalink
Post by erik quanstrom
Post by Bakul Shah
The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
!? the ability to pass typed records around is an idea that was
tarred, feathered, drawn and quartered by unix. files, and therefore
streams, have no type. they are byte streams.
I was not talking about "records" but s-expressions. "json"
is kind of sort of the same thing. Without a generally useful
and simple such mechanism, people end up devising their own.
The 9p format for instance. And go has typed channels.
Post by erik quanstrom
rc already passes structured data through the environment.
rc variables in the environment are defined as
var: [^ctl-a]*
| ([^ctl-a]*) ctl-a list
so there is precedent for this in shells.
And this.
erik quanstrom
2012-08-29 02:23:20 UTC
Permalink
Post by Bakul Shah
Post by erik quanstrom
Post by Bakul Shah
The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
!? the ability to pass typed records around is an idea that was
tarred, feathered, drawn and quartered by unix. files, and therefore
streams, have no type. they are byte streams.
I was not talking about "records" but s-expressions. "json"
is kind of sort of the same thing. Without a generally useful
and simple such mechanism, people end up devising their own.
The 9p format for instance. And go has typed channels.
it sounds like you're saying 9p isn't useful. .... i must be reading
your post incorrectly.

- erik
Bakul Shah
2012-08-29 02:44:20 UTC
Permalink
Post by erik quanstrom
Post by Bakul Shah
Post by erik quanstrom
Post by Bakul Shah
The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
!? the ability to pass typed records around is an idea that was
tarred, feathered, drawn and quartered by unix. files, and therefore
streams, have no type. they are byte streams.
I was not talking about "records" but s-expressions. "json"
is kind of sort of the same thing. Without a generally useful
and simple such mechanism, people end up devising their own.
The 9p format for instance. And go has typed channels.
it sounds like you're saying 9p isn't useful. .... i must be reading
your post incorrectly.
9p is quite useful. But the same semantics could've been
implemented using a more universal but compact structured
format such as s-expr. It is not the only choice but to me it
seems to strike a reasonable balance (compared to bloaty XML
at one extreme, tightly packed binary structures at another,
and byte streams with printf/parse encode/decode at the third
extreme).
erik quanstrom
2012-08-29 04:28:32 UTC
Permalink
Post by Bakul Shah
Post by erik quanstrom
Post by Bakul Shah
Post by erik quanstrom
Post by Bakul Shah
The feature I want is the ability to pass not just character
values in environment or pipes but arbitrary Scheme objects.
But that requires changes at the OS level (or mapping them
to/from strings, which is a waste if both sides can handle
structured objects).
!? the ability to pass typed records around is an idea that was
tarred, feathered, drawn and quartered by unix. files, and therefore
streams, have no type. they are byte streams.
I was not talking about "records" but s-expressions. "json"
is kind of sort of the same thing. Without a generally useful
and simple such mechanism, people end up devising their own.
The 9p format for instance. And go has typed channels.
it sounds like you're saying 9p isn't useful. .... i must be reading
your post incorrectly.
9p is quite useful. But the same semantics could've been
implemented using a more universal but compact structured
format such as s-expr. It is not the only choice but to me it
seems to strike a reasonable balance (compared to bloaty XML
at one extreme, tightly packed binary structures at another,
and byte streams with printf/parse encode/decode at the third
extreme).
i don't see the problem. 9p is not in any way special to the kernel.
only devmnt knows about it, and it is only used to mount file servers.
in theory, one could substitute something else. it wouldn't quite be
plan 9, and it wouldn't be interoperable, but there's no reason it couldn't
be done. authentication speaks special protocols. venti speaks a special
protocol. so i don't see why kernel support would even be helpful in
implementing your s-expression protocol. and there's no reason
a 9p over s-expression device can't be implemented.

imho, the reason for constraining 9p to exactly the operations needed
is to make it easy to prove the protocol correct.

- erik
dexen deVries
2012-08-28 19:34:25 UTC
Permalink
Post by erik quanstrom
(...)
Post by dexen deVries
variable scoping (better than subshell) would help writing larger
scripts, but that's not necessarily an improvement ;-) something
similar to LISP's `let' special form, for dynamic binding.
there is variable scoping. you can write
x=() y=() cmd
thank you good sire, for you've just made my day.


now i see i can do:

x=1 y=2 z=3

...and only `z' retains its new value in the external scope, while `x' and `y'
are limited in scope.


horray for rc and helpful 9fans,
--
dexen deVries

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and
backward simultaneously. Not satisfied with the number of deaths and permanent
maimings from that invention he invents C and Unix.
arisawa
2012-08-29 00:06:35 UTC
Permalink
Hello,
Post by dexen deVries
x=1 y=2 z=3
...and only `z' retains its new value in the external scope, while `x' and `y'
are limited in scope.
No.

ar% a=1 b=2 c=3; echo $a $b $c
1 2 3
ar% a=() b=() c=()
ar% a=1 b=2 {c=3}; echo $a $b $c
3
ar%

Kenji Arisawa
dexen deVries
2012-08-29 08:12:17 UTC
Permalink
Post by arisawa
Hello,
Post by dexen deVries
x=1 y=2 z=3
...and only `z' retains its new value in the external scope, while `x' and
`y' are limited in scope.
No.
ar% a=1 b=2 c=3; echo $a $b $c
1 2 3
ar% a=() b=() c=()
ar% a=1 b=2 {c=3}; echo $a $b $c
3
ar%
indeed, thanks.
--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
Bakul Shah
2012-08-28 19:53:02 UTC
Permalink
Post by erik quanstrom
Post by dexen deVries
switch/case would make helluva difference over nested if/if not, if
defaulted to fall-through.
maybe you have an example? because i don't see that. if not works
fine, and can be nested. case without fallthrough is also generally
what i want. if not, i can make the common stuff a function.
Post by dexen deVries
variable scoping (better than subshell) would help writing larger
scripts, but that's not necessarily an improvement ;-) something
similar to LISP's `let' special form, for dynamic binding.
there is variable scoping. you can write
x=() y=() cmd
cmd can be a function body or whatever. x and y are then private
to cmd. you can nest redefinitions.
x=1 y=2 {echo first $x $y; x=a y=b {echo second $x $y; x=α y=β {echo third $x $y}; echo ret second $x $y}; echo ret first $x $y}
first 1 2
second a b
third α β
ret second a b
ret first 1 2
This is basically the same as let. Instead of
let x=1 y=2 foo
you say
x=1 y=2 foo
and this is lexical scoping. try

lex=1 { echo $lex; }
echo $lex
vs
{ var=1; echo $var; }
echo $var
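
(What I'd expect those to print, assuming lex and var start out unset;
an untested sketch:

% lex=1 { echo $lex }
1
% echo $lex

% { var=1; echo $var }
1
% echo $var
1

the prefix form restores lex afterwards, while the plain assignment
inside the braces persists.)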
erik quanstrom
2012-08-28 20:34:10 UTC
Permalink
Post by Dan Cross
Post by erik quanstrom
the haahr/rakitzis es' if makes more sense, even if it's weirder.)
Agreed; es would be an interesting starting point for a new shell.
es is great input. there are really cool ideas there, but it does
seem like a lesson learned to me, rather than a starting point.
Post by Dan Cross
I think in order to really answer that question, one would have to
step back for a moment and really think about what one wants out of a
shell. There seems to be a natural conflict between a programming language
and a command interpreter (e.g., the 'if' vs. 'if not' thing). On
which side does one err?
since the raison d'être of a shell is to be a command interpreter, i'd
go with that.
Post by Dan Cross
I tend to agree. As a command interpreter, rc is more or less fine as
is. I'd really only feel motivated to change whatever people felt
were common nits, and there are fairly few of those.
there are nits of omission, and those can be fixed. ($x(n-m) was added)
Post by Dan Cross
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
Well, something off the top of my head: Unix pipelines are sort of
like chains of coroutines. And they work great for defining linear
combinations of filters. But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams). That
would be kind of an interesting thing to play with in a shell
language; I don't know how practically useful it would be, though.
rc already has non-linear pipelines. but they're not very convenient.

i think part of the problem is answering the question, what problem
would we like to solve. because "a better shell" just isn't well-defined
enough.

my knee-jerk reaction to my own question is that making it easier
and more natural to parallelize dataflow. a pipeline is just a really
low-level way to talk about it. the standard
grep x *.[ch]
forces all the *.[ch] to be generated before 1 instance of grep runs on
whatever *.[ch] evaluates to be.

but it would be okay for almost every use of this if *.[ch] were generated
in parallel with any number of grep's being run.

i suppose i'm stepping close to sawzall now.

- erik
Bakul Shah
2012-08-28 22:46:32 UTC
Permalink
Post by erik quanstrom
my knee-jerk reaction to my own question is that making it easier
and more natural to parallelize dataflow. a pipeline is just a really
low-level way to talk about it. the standard
grep x *.[ch]
forces all the *.[ch] to be generated before 1 instance of grep runs on
whatever *.[ch] evaluates to be.
Here the shell would have to understand program behavior.
Consider something like

8l x.8 y.8 z.8 ...

This can't be parallelized (but a parallelizable loader can be
written).

Maybe you can define a `par' command (sort of like xargs but
invokes in parallel).

echo *.[ch] | par -1 grep x
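
Something close to that can already be faked with a plain rc function
(a rough sketch; pargrep is a made-up name, and output from the
backgrounded greps will interleave):

fn pargrep {
	re=$1
	shift
	for(f in $*){
		grep -n $re $f &
	}
	wait
}

pargrep x *.[ch]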
Post by erik quanstrom
but it would be okay for almost every use of this if *.[ch] were generated
in parallel with any number of grep's being run.
i suppose i'm stepping close to sawzall now.
Be careful!
erik quanstrom
2012-08-29 01:28:07 UTC
Permalink
Post by Bakul Shah
Post by erik quanstrom
my knee-jerk reaction to my own question is that making it easier
and more natural to parallelize dataflow. a pipeline is just a really
low-level way to talk about it. the standard
grep x *.[ch]
forces all the *.[ch] to be generated before 1 instance of grep runs on
whatever *.[ch] evaluates to be.
Here the shell would have to understand program behavior.
Consider something like
8l x.8 y.8 z.8 ...
This can't be parallelized (but a parallelizable loader can be
written).
ya, ya. improving on rc in a noticeable way is hard.
and thinking aloud is a bad idea. and a good way to look foolish.

- erik
dexen deVries
2012-08-29 08:09:49 UTC
Permalink
Post by erik quanstrom
my knee-jerk reaction to my own question is that making it easier
and more natural to parallelize dataflow. a pipeline is just a really
low-level way to talk about it. the standard
grep x *.[ch]
forces all the *.[ch] to be generated before 1 instance of grep runs on
whatever *.[ch] evaluates to be.
but it would be okay for almost every use of this if *.[ch] were generated
in parallel with any number of grep's being run.
(in Linux terms, sorry!)

you can get close with find|xargs -- it runs the command for every -L <number>
lines of input. AFAIK xargs does not parallelize the execution itself.


find -name '*.[ch]' | xargs -L 8 grep REGEX
--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
Dan Cross
2012-08-29 08:38:08 UTC
Permalink
Post by erik quanstrom
Post by Dan Cross
Post by erik quanstrom
the haahr/rakitzis es' if makes more sense, even if it's weirder.)
Agreed; es would be an interesting starting point for a new shell.
es is great input. there are really cool ideas there, but it does
seem like a lesson learned to me, rather than a starting point.
Starting point conceptually, if not in implementation.
Post by erik quanstrom
Post by Dan Cross
I think in order to really answer that question, one would have to
step back for a moment and really think about what one wants out of a
shell. There seems to be a natural conflict between a programming language
and a command interpreter (e.g., the 'if' vs. 'if not' thing). On
which side does one err?
since the raison d'être of a shell is to be a command interpreter, i'd
go with that.
Fair enough, but that will color the flavor of the shell when used as
a programming language. Then again, Inferno's shell was able to
successfully navigate both in a comfortable manner by using clever
facilities available in that environment (module loading and the
like). It's not clear how well that works in an environment like
Unix, let alone Plan 9.
Post by erik quanstrom
Post by Dan Cross
I tend to agree. As a command interpreter, rc is more or less fine as
is. I'd really only feel motivated to change whatever people felt
were common nits, and there are fairly few of those.
there are nits of omission, and those can be fixed. ($x(n-m) was added)
Right.
Post by erik quanstrom
Post by Dan Cross
Post by erik quanstrom
perhaps (let's hope) someone else has better ideas.
Well, something off the top of my head: Unix pipelines are sort of
like chains of coroutines. And they work great for defining linear
combinations of filters. But something that may be interesting would
be the ability to allow the stream of computations to branch; instead
of pipelines being just a list, make them a tree, or even some kind of
dag (if one allows for the possibility of recombining streams). That
would be kind of an interesting thing to play with in a shell
language; I don't know how practically useful it would be, though.
rc already has non-linear pipelines. but they're not very convenient.
And somewhat limited. There's no real concept of 'fanout' of output,
for instance (though that's a fairly trivial command, so probably
doesn't count), or multiplexing input from various sources that would
be needed to implement something like a shell-level data flow network.

Muxing input from multiple sources is hard when the data isn't somehow
self-delimited. For specific applications this is solvable by the
various pieces of the computation just agreeing on how to represent
data and having a program that takes that into account do the muxing,
but for a general mechanism it's much more difficult, and the whole
self-delimiting thing breaks the Unix 'data as text' abstraction by
imposing a more rigid structure.

There may be other ways to achieve the same thing; I remember that the
boundaries of individual writes used to be preserved on read, but I
think that behavior changed somewhere along the way; maybe with the
move away from streams? Or perhaps I'm misremembering? I do remember
that it led to all sorts of hilarious arguments about what the
behavior of things like, 'write(fd, "", 0)' should induce in the
reading side of things, but this was a long time ago.

Anyway, maybe something along the lines of, 'read a message of length
<=SOME_MAX_SIZE from a file descriptor; the message boundaries are
determined by the sending end and preserved by read/write' could be
leveraged here without too much disruption to the current model.
Post by erik quanstrom
i think part of the problem is answering the question, what problem
would we like to solve. because "a better shell" just isn't well-defined
enough.
Agreed.
Post by erik quanstrom
my knee-jerk reaction to my own question is that making it easier
and more natural to parallelize dataflow. a pipeline is just a really
low-level way to talk about it. the standard
grep x *.[ch]
forces all the *.[ch] to be generated before 1 instance of grep runs on
whatever *.[ch] evaluates to be.
but it would be okay for almost every use of this if *.[ch] were generated
in parallel with any number of grep's being run.
i suppose i'm stepping close to sawzall now.
Actually, I think you're stepping closer to the reducers stuff Rich
Hickey has done recently in Clojure, though there's admittedly a lot
of overlap with the sawzall way of looking at things.

- Dan C.
erik quanstrom
2012-08-29 13:57:06 UTC
Permalink
Post by Dan Cross
Post by erik quanstrom
rc already has non-linear pipelines. but they're not very convenient.
And somewhat limited. There's no real concept of 'fanout' of output,
for instance (though that's a fairly trivial command, so probably
doesn't count), or multiplexing input from various sources that would
be needed to implement something like a shell-level data flow network.
Muxing input from multiple sources is hard when the data isn't somehow
self-delimited.
[...]
There may be other ways to achieve the same thing; I remember that the
boundaries of individual writes used to be preserved on read, but I
think that behavior changed somewhere along the way; maybe with the
move away from streams? Or perhaps I'm misremembering?
pipes still preserve write boundaries, as does il. (even the 0-byte write) but tcp of course by
definition does not. but either way, the protocol would need to be
self-framed to be transported on tcp. and even then, there are protocols
that are essentially serial, like tls.
Post by Dan Cross
Post by erik quanstrom
i suppose i'm stepping close to sawzall now.
Actually, I think you're stepping closer to the reducers stuff Rich
Hickey has done recently in Clojure, though there's admittedly a lot
of overlap with the sawzall way of looking at things.
my knowledge of both is weak. :-)

- erik
Charles Forsyth
2012-08-29 15:07:28 UTC
Permalink
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.78.5331
Paul Haeberli, ConMan: A Visual Programming Language for Interactive
Graphics (1988)
I supervised a student who did an implementation for a Blit-like
environment on the Sun3 as a project;
unfortunately I didn't keep a copy. I remember there were several things
left to work out based on the paper.
(The Blit-like environment replaced megabytes of Sunview, in case you were
wondering, and enabled some serious fun.
Sunview enabled some serious head-banging.)
Dan Cross
2012-08-30 10:05:47 UTC
Permalink
Post by erik quanstrom
Post by Dan Cross
Post by erik quanstrom
rc already has non-linear pipelines. but they're not very convenient.
And somewhat limited. There's no real concept of 'fanout' of output,
for instance (though that's a fairly trivial command, so probably
doesn't count), or multiplexing input from various sources that would
be needed to implement something like a shell-level data flow network.
Muxing input from multiple sources is hard when the data isn't somehow
self-delimited.
[...]
There may be other ways to achieve the same thing; I remember that the
boundaries of individual writes used to be preserved on read, but I
think that behavior changed somewhere along the way; maybe with the
move away from streams? Or perhaps I'm misremembering?
pipes still preserve write boundaries, as does il. (even the 0-byte write) but tcp of course by
definition does not. but either way, the protocol would need to be
self-framed to be transported on tcp. and even then, there are protocols
that are essentially serial, like tls.
Right. I think this is the reason for Bakul's question about
s-expressions or JSON or a similar format; those formats are
inherently self-delimiting. The problem with that is that, for
passing those things around to work without some kind of reverse
'tee'-like intermediary, the system has to understand the things
that are being transferred are s-expressions or JSON records or
whatever, not just streams of uninterpreted bytes. We've steadfastly
rejected such system-imposing structure on files in Unix-y type
environments since 1969.

But conceptually, these IPC mechanisms are sort of similar to channels
in CSP-style languages. A natural question then becomes, how do
CSP-style languages handle the issue? Channels work around the muxing
thing by being typed; elements placed onto a channel are indivisible
objects of that type, so one doesn't need to worry about interference
from other objects simultaneously placed onto the same channel in
other threads of execution. Could we do something similar with pipes?
I don't know that anyone wants typed file descriptors; that would
open a whole new can of worms.

Maybe the building blocks are all there; one could imagine some kind
of 'splitter' program that could take input and rebroadcast it across
multiple output descriptors. Coupled with some kind of 'merge'
program that could take multiple input streams and mux them onto a
single output, one could build nearly arbitrarily complicated networks
of computations connected by pipes. Maybe for simplicity constrain
these to be DAGs. With a notation to describe these computation
graphs, one could just do a topological sort of the graph, create
pipes in all the appropriate places and go from there. Is the shell
an appropriate place for such a thing?
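
(For what it's worth, rc's branching syntax already covers a crude form
of the fanout half. A sketch, assuming >{...} is available alongside
<{...}, with gen, left and right standing in for real commands:

gen | tee >{left >/tmp/l} | right >/tmp/r
cat /tmp/l /tmp/r

tee does the fanout by writing into the pipe ``file'' named by >{...}
while its own output continues down the ordinary pipe; the merge at the
end is just a command reading both results. What's missing is a general
mux that preserves record boundaries in the middle of a live pipeline.)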

Forsyth's link looks interesting; I haven't read through the paper in
detail yet, but it sort of reminded me of LabView in a way (where
non-programmers wire together data flows using boxes and arrows and
stuff).
Post by erik quanstrom
Post by Dan Cross
Post by erik quanstrom
i suppose i'm stepping close to sawzall now.
Actually, I think you're stepping closer to the reducers stuff Rich
Hickey has done recently in Clojure, though there's admittedly a lot
of overlap with the sawzall way of looking at things.
my knowledge of both is weak. :-)
The Clojure reducers stuff is kind of slick.

Consider a simple reduction in Lisp; say, summing up a list of numbers
or something like that. In Common Lisp, we may write this as:

(reduce #'+ '(1 2 3 4 5))

In clojure, the same thing would be written as:

(reduce + [1 2 3 4 5])

The problem is how the computation is performed. To illustrate,
here's a simple definition of 'reduce' written in Scheme (R5RS doesn't
have a standard 'reduce' function, but it is most commonly written to
take an initial element, so I do that here).

(define (reduce binop a bs)
  (if (null? bs)
      a
      (reduce binop (binop a (car bs)) (cdr bs))))

Notice how the recursive depth of the function is linear in the length
of the list. But, if one thinks about what I'm doing here (just
addition of simple numbers) there's no reason this can't be done in
parallel. In particular, if I can split the list into evenly sized
parts and recurse, I can limit the recursive depth of the computation
to O(lg n). Something more like:

(define (reduce binop a bs)
  (if (null? bs)
      a
      (let ((halves (split-into-halves bs)))
        (binop (reduce binop a (car halves))
               (reduce binop a (cadr halves))))))

If I can exploit parallelism to execute functions in the recursion
tree simultaneously, I can really cut down on execution time. The
requirement is that binop over a and bs's is a monoid; that is, binop
is associative over the set from which 'a' and 'bs' are drawn, and 'a'
is an identity element.

This sounds wonderful, of course, but in Lisp and Scheme, lists are
built from cons cells, and even if I have some magic
'split-into-halves' function that satisfies the requirements of
reduce, doing so is still necessarily linear, so I don't gain much.
Besides, having to pass around the identity all the time is a bummer.

But in clojure, the Lisp concept of a list (composed of cons cells) is
generalized into the concept of a 'seq'. A seq is just a sequence of
things; it could be a list, a vector, some other container (say, a
sequence of key/value pairs derived from some kind of associative
structure), or a stream of data being read from a file or network
connection.

What's the *real* problem here? The issue is that reduce "knows" too
much about the things it is reducing over. Doing things sequentially
is easy, but slow; doing things in parallel requires that reduce know
a lot about the type of thing it's reducing over (e.g., this magic
'split-into-halves' function). Further, that might not be appropriate
for *all* sequence types; e.g., files or lists made from cons cells.

The insight of the reducers framework is that one can just ask the
container to reduce itself. Basically, pass it a function and say,
"here, reduce yourself with this function however you see fit." Then,
random-access containers can do things in parallel; lists and files
and things can do things sequentially; associative containers can do
whatever they want, etc. The implementation is kind of interesting;
more information is here:
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html

Your example of running multiple 'grep's in parallel sort of reminded
me of this, though it occurs to me that this can probably be done with
a command: a sort of 'parallel apply' thing that can run a command
multiple times concurrently, each invocation on a range of the
arguments. But making it simple and elegant is likely to be tricky.

- Dan C.
Charles Forsyth
2012-08-30 10:43:12 UTC
Permalink
typed command languages:
I F Currie, J M Foster, Curt: The Command Interpreter Language for Flex
http://www.vitanuova.com/dist/doc/rsre-3522-curt.pdf
dexen deVries
2012-08-30 13:41:15 UTC
Permalink
Post by Dan Cross
(...)
Your example of running multiple 'grep's in parallel sort of reminded
me of this, though it occurs to me that this can probably be done with
a command: a sort of 'parallel apply' thing that can run a command
multiple times concurrently, each invocation on a range of the
arguments. But making it simple and elegant is likely to be tricky.
now that i think of it...

mk creates DAG of dependences and then reduces it by calling commands, going
in parallel where applicable.

erik's example with grep x *.[ch] boils down to two cases:
- for single use, do it simple & slow way -- just run single grep process for
all files
- but when you expect to traverse those files often, prepare a mkfile
(preferably in a semi-automatic way) which will perform search in parallel.

caveat: output of one grep instance could end up in the midst of a /line/ of
output of another grep instance.
--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
erik quanstrom
2012-08-30 13:47:59 UTC
Permalink
Post by dexen deVries
caveat: output of one grep instance could end up in the midst of a /line/ of
output of another grep instance.
grep -b. but in general if the bio library had an option to output line-wise,
then the problem could be avoided. otherwise, one would need to mux the
output.

- erik
erik quanstrom
2012-08-30 14:29:31 UTC
Permalink
Post by erik quanstrom
grep -b. but in general if the bio library had an option to output
line-wise, then the problem could be avoided. otherwise, one would need to
mux the output.
to quote you, erik,
Post by erik quanstrom
pipes still preserve write boundaries, as does il
so, hopefully, a dumb pipe to cat would do the job...? :^)
grep-single-directory:VQ: $FILES_IN_THE_DIR
grep $regex $prereq | cat
i think you still need grep -b, because otherwise grep uses
the bio library to buffer output, and bio doesn't respect lines.

- erik
dexen deVries
2012-08-30 14:26:29 UTC
Permalink
Post by erik quanstrom
Post by dexen deVries
caveat: output of one grep instance could end up in the midst of a /line/
of output of another grep instance.
grep -b. but in general if the bio library had an option to output
line-wise, then the problem could be avoided. otherwise, one would need to
mux the output.
to quote you, erik,
Post by erik quanstrom
pipes still preserve write boundaries, as does il
so, hopefully, a dumb pipe to cat would do the job...? :^)

grep-single-directory:VQ: $FILES_IN_THE_DIR
grep $regex $prereq | cat
--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
erik quanstrom
2012-08-30 14:26:43 UTC
Permalink
The thing is that mk doesn't really do anything to set up connections
between the commands it runs.
it does. the connections are through the file system.

- erik
Dan Cross
2012-08-30 14:33:07 UTC
Permalink
Post by erik quanstrom
The thing is that mk doesn't really do anything to set up connections
between the commands it runs.
it does. the connections are through the file system.
No. The order in which commands are run (or if they are run at all)
is based on file timestamps, so in that sense it uses the filesystem
for coordination, but mk itself doesn't do anything to facilitate
interprocess communications between the commands it runs (for example
setting up pipes between commands).

- Dan C.
erik quanstrom
2012-08-30 14:41:38 UTC
Permalink
Post by Dan Cross
Post by erik quanstrom
The thing is that mk doesn't really do anything to set up connections
between the commands it runs.
it does. the connections are through the file system.
No. The order in which commands are run (or if they are run at all)
is based on file timestamps, so in that sense it uses the filesystem
for coordination, but mk itself doesn't do anything to facilitate
interprocess communications between the commands it runs (for example
setting up pipes between commands).
what i was saying is that mk knows and ensures that the output files
are there. the fact that it's not in the middle of the conversation is
an implementation detail, imho.

that is, mk is built on the assumption that programs communicate through
files; $O^c communicates to $O^l by producing .$O files. mk rules
know this.

- erik
dexen deVries
2012-08-30 14:48:39 UTC
Permalink
Post by erik quanstrom
what i was saying is that mk knows and ensures that the output files
are there. the fact that it's not in the middle of the conversation is
an implementation detail, imho.
that is, mk is built on the assumption that programs communicate through
files; $O^c communicates to $O^l by producing .$O files. mk rules
know this.
shouldn't be the case for rules with virtual targets (V). such rules are
always executed, and the order should only depend on the implementation of DAG
traversal. ``Files may be made in any order that respects the preceding
restrictions'', from manpage.

if mk was used for executing grep in parallel, prerequisites would be actual
files, but targets would be virtual; probably 1...$NPROC targets per
directory.
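
something like this, perhaps (a rough mkfile sketch with invented
directory names; the |cat in the recipe is the write-boundary trick from
earlier in the thread, and erik's grep -b point still applies):

REGEX=whatever

search:V: grep-cmd grep-lib grep-gui

grep-%:VQ:
	grep $REGEX $stem/*.[ch] | cat

mk runs the independent grep-% recipes concurrently, up to $NPROC at a
time.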


anyway, a meld of Rc shell and mk? crazy idea.
--
dexen deVries

[[[↓][→]]]

I'm sorry that this was such a long letter, but I didn't have time to write
you a short one. -- Blaise Pascal
Lucio De Re
2012-08-30 15:13:05 UTC
Permalink
Post by dexen deVries
anyway, a meld of Rc shell and mk? crazy idea.
Inferno (Vitanuova) released a "mash" a ways back, but apparently the
sources were lost. It was mind-bogglingly interesting!

++L
Charles Forsyth
2012-08-30 15:13:10 UTC
Permalink
Errr ... no. Twice: mash was not VN code but brucee's preemptive strike
against a POSIX shell for Lucent's Inferno;
VN's Inferno had a shell with a different style done by Roger Peppe.
Post by Lucio De Re
Inferno (Vitanuova) released a "mash" a ways back, but apparently the
sources were lost.
Burton Samograd
2012-08-30 15:07:01 UTC
Permalink
Post by dexen deVries
anyway, a meld of Rc shell and mk? crazy idea.
Inferno (Vitanuova) released a "mash" a ways back, but apparently the sources were lost. It was mind-bogglingly interesting!
In case anyone's interested (like I was):

http://www.vitanuova.com/inferno/man/1/mash.html

--
Burton Samograd

s***@9front.org
2012-08-30 15:11:54 UTC
Permalink
Post by dexen deVries
anyway, a meld of Rc shell and mk? crazy idea.
What was mash?

-sl

Dan Cross
2012-08-30 14:24:31 UTC
Permalink
Post by dexen deVries
Post by Dan Cross
(...)
Your example of running multiple 'grep's in parallel sort of reminded
me of this, though it occurs to me that this can probably be done with
a command: a sort of 'parallel apply' thing that can run a command
multiple times concurrently, each invocation on a range of the
arguments. But making it simple and elegant is likely to be tricky.
now that i think of it...
mk creates DAG of dependences and then reduces it by calling commands, going
in parallel where applicable.
- for single use, do it simple & slow way -- just run single grep process for
all files
- but when you expect to traverse those files often, prepare a mkfile
(preferably in a semi-automatic way) which will perform search in parallel.
caveat: output of one grep instance could end up in the midst of a /line/ of
output of another grep instance.
The thing is that mk doesn't really do anything to set up connections
between the commands it runs.
erik quanstrom
2012-08-30 13:33:33 UTC
Permalink
Post by Dan Cross
rejected such system-imposing structure on files in Unix-y type
environments since 1969.
[...]
Post by Dan Cross
other threads of execution. Could we do something similar with pipes?
I don't know that anyone wants typed file descriptors; that would
open a whole new can of worms.
i don't see that the os can really help here. lib9p has no problem
turning an undelimited byte stream → 9p messages. there's no reason
any other format couldn't get the same treatment.

said another way, we already have typed streams, but they're not
enforced by the operating system.

one can also use the thread library technique, using shared memory.
Post by Dan Cross
Consider a simple reduction in Lisp; say, summing up a list of numbers
(reduce #'+ '(1 2 3 4 5))
(reduce + [1 2 3 4 5])
this reminds me of a bit of /bin/man. it seemed that the case statement
to generate a pipeline of formatting commands was awkward—verbose
and yet limited.

fn pipeline{
	if(~ $#* 0)
		troff $Nflag $Lflag -$MAN | $postproc
	if not{
		p = $1; shift
		$p | pipeline $*
	}
}

fn roff {
	...
	fontdoc $2 | pipeline $preproc
}
Post by Dan Cross
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html
Your example of running multiple 'grep's in parallel sort of reminded
me of this, though it occurs to me that this can probably be done with
a command: a sort of 'parallel apply' thing that can run a command
multiple times concurrently, each invocation on a range of the
arguments. But making it simple and elegant is likely to be tricky.
actually, unless i misread (i need more coffee), the blog sounds just like
xargs.

- erik
Dan Cross
2012-08-30 14:25:39 UTC
Permalink
A parallel apply sort of thing could be used with xargs, of course;
'whatever | xargs papply foo' could keep some $n$ of foo's running at
the same time. The magic behind 'papply foo `{whatever}' is that it
knows how to interpret its arguments in blocks. xargs will invoke a
command after reading $n$ arguments, but that's mainly to keep from
overflowing the argument buffer, and (to my knowledge) it won't try to
keep multiple instances running them in parallel.
Oops, I should have checked the man page before I wrote. It seems
that at least some versions of xargs have a '-P' for 'parallel' mode.
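
(In GNU xargs terms, something along these lines should keep four greps
going at once, with -n bounding the number of files per invocation:

find . -name '*.[ch]' | xargs -n 32 -P 4 grep x

The interleaved-output caveat from earlier in the thread still applies.)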
Dan Cross
2012-08-30 14:21:45 UTC
Permalink
Post by erik quanstrom
Post by Dan Cross
rejected such system-imposing structure on files in Unix-y type
environments since 1969.
[...]
Post by Dan Cross
other threads of execution. Could we do something similar with pipes?
I don't know that anyone wants typed file descriptors; that would
open a whole new can of worms.
i don't see that the os can really help here. lib9p has no problem
turning an undelimited byte stream → 9p messages. there's no reason
any other format couldn't get the same treatment.
Yeah, I don't see much here unless one breaks the untyped stream model
(from the perspective of the system).
Post by erik quanstrom
said another way, we already have typed streams, but they're not
enforced by the operating system.
Yes, but then every program that participates in one of these
computation networks has to have that type knowledge baked in. The
Plan 9/Unix model seems to preclude a general mechanism.
Post by erik quanstrom
one can also use the thread library technique, using shared memory.
Sure, but that doesn't do much for designing a new shell. :-)
Post by erik quanstrom
Post by Dan Cross
Consider a simple reduction in Lisp; say, summing up a list of numbers
(reduce #'+ '(1 2 3 4 5))
(reduce + [1 2 3 4 5])
this reminds me of a bit of /bin/man. it seemed that the case statement
to generate a pipeline of formatting commands was awkward—verbose
and yet limited.
fn pipeline{
	if(~ $#* 0)
		troff $Nflag $Lflag -$MAN | $postproc
	if not{
		p = $1; shift
		$p | pipeline $*
	}
}
fn roff {
	...
	fontdoc $2 | pipeline $preproc
}
Ha! That's something. I'm not sure what, but definitely something (I
actually kind of like it).
Post by erik quanstrom
Post by Dan Cross
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html
Your example of running multiple 'grep's in parallel sort of reminded
me of this, though it occurs to me that this can probably be done with
a command: a sort of 'parallel apply' thing that can run a command
multiple times concurrently, each invocation on a range of the
arguments. But making it simple and elegant is likely to be tricky.
actually, unless i misread (i need more coffee), the blog sounds just like
xargs.
Hmm, not exactly. xargs would be like reducers if xargs somehow asked
stdin to apply a program to itself.

A parallel apply sort of thing could be used with xargs, of course;
'whatever | xargs papply foo' could keep some $n$ of foo's running at
the same time. The magic behind 'papply foo `{whatever}' is that it
knows how to interpret its arguments in blocks. xargs will invoke a
command after reading $n$ arguments, but that's mainly to keep from
overflowing the argument buffer, and (to my knowledge) it won't try to
keep multiple instances running them in parallel.

Hmm, I'm afraid I'm off in the realm of thinking out loud at this
point. Sorry if that's noisy for folks.

- Dan C.
Charles Forsyth
2012-08-30 14:45:22 UTC
Permalink
If you look at the paper I referenced, you will. Similar abilities appeared
in systems that supported persistence and persistent programming
languages (cf. Malcolm Atkinson, not Wikipedia).
Post by erik quanstrom
i don't see that the os can really help here.
Charles Forsyth
2012-08-30 14:55:57 UTC
Permalink
As another example, also from Flex,
J M Foster, I F Currie, "Remote Capabilities", The Computer Journal, 30(5),
1987, pp. 451-7.

http://comjnl.oxfordjournals.org/content/30/5/451.full.pdf
Post by Charles Forsyth
If you look at the paper I referenced, you will. Similar abilities
appeared in systems that supported persistence and persistent programming
languages (cf. Malcolm Atkinson, not Wikipedia).
erik quanstrom
2012-08-30 14:34:21 UTC
Permalink
Post by Dan Cross
Hmm, I'm afraid I'm off in the realm of thinking out loud at this
point. Sorry if that's noisy for folks.
THANK YOU. if 9fans needs anything, it's more thinking.

i'm not an edison fan, but i do like one thing he said, which was
that he had not failed, but simply discovered that the $n ways
he'd tried so far do not work.

- erik
erik quanstrom
2012-08-30 14:44:40 UTC
Permalink
Post by Dan Cross
Post by erik quanstrom
said another way, we already have typed streams, but they're not
enforced by the operating system.
Yes, but then every program that participates in one of these
computation networks has to have that type knowledge baked in. The
Plan 9/Unix model seems to preclude a general mechanism.
that's what i thought when i first read the plan 9 papers. but it
turns out, that it works out just fine for file servers, ssl, authentication,
etc. why can't it work for another type of agreed protocol? obviously
you'd need something along the lines of tlsclient/tlssrv if you wanted
normal programs to do this, but it might be that just a subset of programs
are really interested in participating.
Post by Dan Cross
Post by erik quanstrom
one can also use the thread library technique, using shared memory.
Sure, but that doesn't do much for designing a new shell. :-)
the shell itself could have channels, without the idea escaping
into the wild.

- erik