Discussion:
[9fans] gcc not an option for Plan9
Richard Miller
2013-03-23 09:53:29 UTC
Permalink
With gcc 4.8.0, the implementation of gcc is now in C++... And to
compile a compiler, one needs a C++ compiler...
This is not an insurmountable obstacle. In fact it's the normal
situation when retargeting any self-compiled compiler for a new
instruction set.
Steve Simon
2013-03-23 09:56:19 UTC
Permalink
I wonder if the new gcc will be written in cfront compatible
c++ - that would work... ☺

-Steve
t***@polynum.com
2013-03-23 21:34:40 UTC
Permalink
Post by Steve Simon
I wonder if the new gcc will be written in cfront compatible
c++ - that would work... ?
I guess the answer is: no, since the compiler has to be C++ 2003
compatible. But I guess too that your mention of cfront was a joke...
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
t***@polynum.com
2013-03-23 10:05:19 UTC
Permalink
Post by Richard Miller
With gcc 4.8.0, the implementation of gcc is now in C++... And to
compile a compiler, one needs a C++ compiler...
This is not an insurmountable obstacle. In fact it's the normal
situation when retargeting any self-compiled compiler for a new
instruction set.
Except that C is a great language because it is both high-level enough
and low-level enough (close to the machine) that a compiler written in C,
without optimizations and using only integer arithmetic, is "easy" (less
expensive) to write from scratch. Here, the dependencies increase.
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
l***@proxima.alt.za
2013-03-23 10:24:07 UTC
Permalink
Post by t***@polynum.com
Except that C is a great language because it is both high
level enough and low level (near machine) that a compiler written in C
without optimizations and pure integer is "easy" (less expensive) to
write from scratch. Here, the dependencies increase.
I wouldn't cry too many tears over GCC. Having investigated Hogan's
port of GCC (3.0) to Plan 9, my impression is that GCC would never
really fit in with the Plan 9 paradigm; it is way too expensive and
unrewarding to bend it into shape, C++ notwithstanding.

Hence Go, together with the upgraded (if you want to call them that)
Plan 9 development tools. I'm still of the opinion that a convergence
of the Plan 9 tools and the Go development can become the Esperanto of
information technology, given that ease of portability to foreign
architectures is a founding principle. Only time will tell; sadly, I
don't see any organisation or authoritative person recommending 8c et
al. for development, though I expect that would be a step forward.

The obsession with optimisation is partly to blame, too. But
not alone.

Just as a side note, I was hoping to port Plan 9 to the Olimex
LinuXino, one of many projects that may or may not see the light of
day. It comes with some or other variety of Linux, but has too little
memory (64MiB) to be more than an embedded prototyping system and the
default Linux release comes without the GCC development system. It
struck me that the Go system could be cross-compiled for Linux/Arm on
my Plan 9 network and used on the LinuXino. In fact, I have
implemented some small applications in this way although I have had no
occasion to do more than that. If I could figure out a way to compile the
Go distribution with its own tools, I might be able to prove that Go is
a viable release development system without GCC backing it, something
we have shown to a smaller audience with the Plan9/386 distribution.

++L
Peter A. Cejchan
2013-03-23 11:40:02 UTC
Permalink
@Lucio: I still hope that some clone of plan9/nix/nxm will merge with Go
... just my dream, and I am just an embryo of a programmer
(as multiply stated here and elsewhere) so take it easy.... however, I'm
moving all my old stuff (and creating new stuff) to Go
[unfortunately, I am afraid I will never see the 9GoNix OS ;-) brought into
life]

Cheers,
peter.
Post by l***@proxima.alt.za
Post by t***@polynum.com
Except that C is a great language because it is both high
level enough and low level (near machine) that a compiler written in C
without optimizations and pure integer is "easy" (less expensive) to
write from scratch. Here, the dependencies increase.
I wouldn't cry too many tears over GCC. Having investigated Hogan's
port of GCC (3.0) to Plan 9, my impression is that GCC would never
really fit in with the Plan 9 paradigm, it is way too expensive and
unrewarding to bend it into shape, C++ notwithstanding.
Hence Go, together with the upgraded (if you want to call them that)
Plan 9 development tools. I'm still of the opinion that a convergence
of the Plan 9 tools and the Go development can become the Esperanto of
information technology, given that ease of portability to foreign
architectures is a founding principle. Only time will tell, sadly I
don't see any organisation or authoritative person recommending 8c et
al for development, where I expect that would be a step forward.
The obsession with optimisation, in part, is to be blamed, too. But
not alone.
Just as a side note, I was hoping to port Plan 9 to the Olimex
LinuXino, one of many project that may or may not see the light of
day. It comes with some or other variety of Linux, but has too little
memory (64MiB) to be more than an embedded prototyping system and the
default Linux release comes without the GCC development system. It
struck me that the Go system could be cross-compiled for Linux/Arm on
my Plan 9 network and used on the LinuXino. In fact, I have
implemented some small applications in this way although I have had no
occasion to do more than that. If I could figure a way to compile the
Go distribution with its own tools, I may be able to prove that Go is
a viable release development system without GCC backing it, something
we have shown to a smaller audience with the Plan9/386 distribution.
++L
l***@proxima.alt.za
2013-03-23 12:25:28 UTC
Permalink
Post by Peter A. Cejchan
[unfortunately, I am afraid I will never see the 9GoNix OS ;-) brought into
life]
I think Plan 9 spoils us; the OS is just a tool, not a faith. Just as
Go is not a faith, just a logical evolution of Alef, through Limbo, to
the platforms and conditions that prevail today. What matters is to
be able to produce code that runs on useful platforms and does not
require blood, sweat and tears to be made to work. Myself, I want to
teach underprivileged kids to program; the OS platform of choice is
Plan 9, but I also need to prepare them for the Windows, Linux and OSX
world.

++L
erik quanstrom
2013-03-23 16:09:04 UTC
Permalink
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
sorry to be dense. larger than what?

- erik
Francisco J Ballesteros
2013-03-23 16:19:38 UTC
Permalink
Than plan 9's C ones.
Post by erik quanstrom
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
sorry to be dense. larger than what?
- erik
erik quanstrom
2013-03-23 16:23:05 UTC
Permalink
Post by Francisco J Ballesteros
Than plan 9's C ones.
Post by erik quanstrom
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
sorry to be dense. larger than what?
ah. i thought you were saying that it was an order of magnitude
larger than the unix version of go.

by the way, does this scale with lines of go code, or is it just
that the trivial go executable is megs?

- erik
Gorka Guardiola
2013-03-23 16:39:52 UTC
Permalink
Post by erik quanstrom
ah. i thought you were saying that it was an order of magnitude
larger than the unix version of go.
by the way, does this scale with lines of go code, or is it just
that the trivial go executable is megs?
A simple hello world is megs.

G.
Rob Pike
2013-03-23 17:15:20 UTC
Permalink
Much of which is symbols. Plus, a simple computer has gigs of memory.

Yes, it's remarkable how much bigger programs are now than they were
20 years ago, but 20 years ago the same things were being said. I
understand your objection - I really do - but it's time to face the
future. The smart phone in your pocket is roughly 100 times faster
than the machine Plan 9 was developed on and has 1000 times the RAM.
Computers are incredibly powerful now, and the technologies of today
can use that power well (as I claim Go does) or poorly (as some others
do), or ignore it at the risk of obsolescence.

-rob
erik quanstrom
2013-03-23 17:20:37 UTC
Permalink
Post by Rob Pike
Much of which is symbols. Plus, a simple computer has gigs of memory.
so, assuming demand loading, this is more of a
disk space issue rather than a memory issue?

- erik
Rob Pike
2013-03-23 19:29:36 UTC
Permalink
so, assuming demand loading, this is more of a
disk space issue rather than a memory issue?

It's only an issue on mailing lists and discussion groups.

-rob
erik quanstrom
2013-03-23 19:31:21 UTC
Permalink
Post by erik quanstrom
so, assuming demand loading, this is more of a
disk space issue rather than a memory issue?
It's only an issue on mailing lists and discussion groups.
i was hoping to know if the symbols are used for reflection.

- erik
Rob Pike
2013-03-23 19:34:02 UTC
Permalink
Yes, they are necessary for reflection. Fmt uses reflection - and uses
it well, as rminnich has attested.
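(A tiny sketch of what that reflection buys; the point type below is invented
for the example, not taken from any of the programs discussed here.)

package main

import "fmt"

// point is a made-up type for the example.
type point struct {
	X, Y int
}

func main() {
	p := point{3, 4}
	// %v has no type-specific code for point; fmt inspects the value
	// at run time through the reflect package and prints {3 4}.
	fmt.Printf("%v\n", p)
}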
Post by erik quanstrom
Post by erik quanstrom
so, assuming demand loading, this is more of a
disk space issue rather than a memory issue?
It's only an issue on mailing lists and discussion groups.
i was hoping to know if the symbols are used for reflection.
- erik
Rob Pike
2013-03-23 19:33:32 UTC
Permalink
For example, looking at what go install does wrt what a few mkfiles would
do for the same go source is illustrative of what I'm trying to say.

I've never seen a mkfile that builds a transitive dependency graph
given only the source code, downloads the relevant dependencies from
the network, builds all the dependencies, and installs the result. Yes,
mk could do that, but it would need a lot of help, and that help is
not going to materialize. Why use mk when the source code has all the
information you need to build the program?

I was a big fan of mk, and it (or make, depending) is still used to
help bootstrap the Go installation, but honestly I do not miss writing
mkfiles one bit.


-rob
Francisco J Ballesteros
2013-03-23 19:39:05 UTC
Permalink
Post by Rob Pike
Why use mk when the source code has all the
information you need to build the program
speed.
You have a fast and nice compiler.
I only copy a std mkfile to each dir with go source. I don't write them.
andrey mirtchovski
2013-03-23 19:58:28 UTC
Permalink
with mkfiles you can never have something like http://godoc.org. in
fact, it would be very difficult to make something like godoc for any
other language without major support from the authors or volunteers.

what godoc.org does is amazing -- when you type in a query for
something that looks like a go package it will attempt to download it
and generate the package documentation from the source code on the
fly. no interaction from the author or website maintainer need to
happen, all is done by the go tool, usually with enough speed that not
much waiting is involved. all the package needs to do is abide by a
few rules in naming imports.
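(as a minimal sketch of where that documentation comes from, godoc simply
renders the comments attached to the package clause and to exported names in
the source; the package name and constant below are invented for the example:)

// Package spiro draws spirograph-style curves.
// godoc turns this comment into the package overview on the
// generated documentation page; nothing besides the source is needed.
package spiro

// MaxTurns is the default number of revolutions to draw; comments
// on exported names become their documentation entries.
const MaxTurns = 64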

try it for yourself (these packages will surely not be in the index):

http://godoc.org/code.google.com/p/goxscr/qcs
http://godoc.org/code.google.com/p/goxscr/deco
http://godoc.org/code.google.com/p/goxscr/palette
http://godoc.org/code.google.com/p/goxscr/rorschach
http://godoc.org/code.google.com/p/goxscr/spirograph

the stuff that falls out of such a tool is even more impressive.
here's an import graph for one of the xscr programs:
http://godoc.org/code.google.com/p/goxscr/moire?view=import-graph

here's the one for godoc:
http://godoc.org/code.google.com/p/go/src/cmd/godoc?view=import-graph
Francisco J Ballesteros
2013-03-23 20:11:36 UTC
Permalink
I used noweb, and web before that, long before go was conceived.
In fact, I was a huge fan of that. Knuth's literate programming was fun.
It was tiny compared with the godoc tool. Although the go tool is tiny compared
with eclipse or even the old code warrior.

I like the language, and worked to get it running on our systems.
It's nice how the go tool does some of the things it does, although there are
other things it does that I prefer to do with other tools.

I was just mentioning some facts about it I don't like.
Post by andrey mirtchovski
with mkfiles you can never have something like http://godoc.org. in
fact, it would be very difficult to make something like godoc for any
other language without major support from the authors or volunteers.
what godoc.org does is amazing -- when you type in a query for
something that looks like a go package it will attempt to download it
and generate the package documentation from the source code on the
fly. no interaction from the author or website maintainer need to
happen, all is done by the go tool, usually with enough speed that not
much waiting is involved. all the package needs to do is abide by a
few rules in naming imports.
http://godoc.org/code.google.com/p/goxscr/qcs
http://godoc.org/code.google.com/p/goxscr/deco
http://godoc.org/code.google.com/p/goxscr/palette
http://godoc.org/code.google.com/p/goxscr/rorschach
http://godoc.org/code.google.com/p/goxscr/spirograph
the stuff that falls out of such a tool is even more impressive.
http://godoc.org/code.google.com/p/goxscr/moire?view=import-graph
http://godoc.org/code.google.com/p/go/src/cmd/godoc?view=import-graph
Kurt H Maier
2013-03-24 00:44:33 UTC
Permalink
Post by erik quanstrom
so, assuming demand loading, this is more of a
disk space issue rather than a memory issue?
It's only an issue on mailing lists and discussion groups.
-rob
Also on university campuses and in web programming shops, which are the
only other two places Go is discussed.

khm
Francisco J Ballesteros
2013-03-23 17:30:41 UTC
Permalink
Although, in general, I agree. I think that having the resources doesn't mean
we have to consume them (although we might if that pays off, of course).

For example, looking at what go install does wrt what a few mkfiles would
do for the same go source is illustrative of what I'm trying to say.
Post by Rob Pike
Much of which is symbols. Plus, a simple computer has gigs of memory.
Yes, it's remarkable how much bigger programs are now than they were
20 years ago, but 20 years ago the same things were being said. I
understand your objection - I really do - but it's time to face the
future. The smart phone in your pocket is roughly 100 times faster
than the machine Plan 9 was developed on and has 1000 times the RAM.
Computers are incredibly powerful now, and the technologies of today
can use that power well (as I claim Go does) or poorly (as some others
do), or ignore it at the risk of obsolescence.
-rob
hiro
2013-03-23 17:44:49 UTC
Permalink
Post by Francisco J Ballesteros
Although, in general, I agree.
Are you surprised that you do?
Nemo
2013-03-23 17:52:24 UTC
Permalink
I'll try to say it in a different way.

I asked Siri and (s)he said (s)he does not consume many resources.
Now, that's nice. I'm willing to give up the machine resources for that, or
for dialling by voice on my car.

*But*, I'm not sure that to print "Hi there" I need a few megs, nor am I sure
that to install and compile a few sources I need to see hundreds of stats/reads.
The funny thing is that the compilers seem to be really fast, but the go tool
fixes that problem. Fortunately, a few mkfiles fix the go tool problems.

Also, doing a cp /bin/echo /bin/hg improves things a bit.

This is just IMHO, the language is nice as are parts of the runtime.
I'm glad go is out there.
Post by hiro
Post by Francisco J Ballesteros
Although, in general, I agree.
Are you surprised that you do?
hiro
2013-03-23 17:23:23 UTC
Permalink
What matters is to be able to produce code
What matters is to get rid of code.
hiro
2013-03-23 17:33:32 UTC
Permalink
I feel like the future is repeating itself. Don't know what you find
so worthy in this.
hiro
2013-03-23 17:31:45 UTC
Permalink
it's time to face the future.
will go be able to run in the browser with activex? is it compatible
with node.js?
t***@polynum.com
2013-03-23 17:37:39 UTC
Permalink
Post by Rob Pike
Yes, it's remarkable how much bigger programs are now than they were
20 years ago, but 20 years ago the same things were being said.
Can we conclude that the added power is lost to the "machinery" rather
than gained by the applications? Just as if a washing powder that already
gave a "perfect" result 20 years ago had been "improved" (from perfect to
more than perfect) so that it gives a perfect result even if you tie a
knot around the heavy dirt before throwing it in the washing machine: you
get a perfectly clean result (as you could before, without the knot),
except that you now have to tie the knots beforehand and try to untie
them afterwards.

I remember when I started to work in a surveyor's office. There was
MicroStation, back in the early 90s, which ran on a DOS extender with
perfect graphical performance (you were able to work flawlessly,
zooming, panning or whatever). You were never waiting for the
application or the display; it worked faster than your input.

Once the Windows "improvement" came, it took several years, and an
order-of-magnitude increase in PC power, for computers to give the very
same user experience. They had to recover from the Windows improvements first...
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
erik quanstrom
2013-03-23 17:55:15 UTC
Permalink
Post by t***@polynum.com
I remember when I started to work in a surveyor office. There was
microstation, back in early 90s, that ran on a DOS extender with a
perfect graphical performance (you were able to work flawlessly,
zooming, panning or whatever). You were never waiting for the
application or the display; it worked faster than your input.
Once Windows "improved" came, it took several years for the computers to
give the very same user experience, by an order of magnitude increase in
power for the "PC". It had to recover from Windows improvements first...
yes. this is a big problem. incremental improvement often fails.
and we see this today with newer phones performing poorly with
"new and improved" software.

the way to get out of this trap is to provide real improvement
by doing something new. (the term of art is "disruptive"—a
rather annoying term. :-)) obviously the new approach isn't going
to be as polished as the old approach. but if the new thing
is a real improvement, folks will put up with the regressions
in unimportant areas.

there was an old way to say this, "you can't make an omelette
without breaking a few eggs". :-)

- erik
Rob Pike
2013-03-23 19:17:29 UTC
Permalink
It's pointless to complain about the size of "hello world". It's not a
real program. In Go's case it's larger than a C binary because the
libraries (and the presence of a runtime) are capable of much more
under the covers, but by the time you write a real program in Go
you'll find the ratio of Go binary to C binary isn't nearly so large;
the incremental cost to the binary of a Go source file compared to a C
source file is negligible.

A house is much heavier than a tent, but it also has a much stronger foundation.

-rob
Rob Pike
2013-03-23 19:19:07 UTC
Permalink
Thanks, Andrey, although what you say about Unicode and fmt isn't
true. Believe it or not, we care about sizes and arranged that fmt
doesn't need to import the whole Unicode tables, only the small subset
it needs.

-rob
Francisco J Ballesteros
2013-03-23 19:32:32 UTC
Permalink
I have a few programs written, including fs sync tools and a few other things.
I guess the largest one might be 10k lines.
The language is nice, although binaries are still large. I mentioned hello world
because that was the trivial example. I saw the same effect with other real
world programs.

I admit it might be necessary, but I wouldn't say sizes are comparable.


Also, I was reading the discussion andrey mentioned by the time it happened.
I guess it didn't reach this list until now because go didn't run on plan 9 until
recently.
Post by Rob Pike
It's pointless to complain about the size of "hello world". It's not a
real program. In Go's case it's larger than a C binary because the
libraries (and the presence of a runtime) are capable of much more
under the covers, but by the time you write a real program in Go
you'll find the ratio of Go binary to C binary isn't nearly so large;
the incremental cost to the binary of a Go source file compared to a C
source file is negligible.
A house is much heavier than a tent, but it also has a much stronger foundation.
-rob
Rob Pike
2013-03-23 19:37:17 UTC
Permalink
If go install is slow on Plan 9, it's because Plan 9's file system is
slow (which it is and always has been), and because go install does
transitive dependencies correctly, which mk does not.

-rob
Rob Pike
2013-03-23 19:43:24 UTC
Permalink
I just did a go install, after a clean, of the biggest binary I'm
working on, using my pokey old mac laptop. It took 0.9 seconds, most
of which was spent in 6l and not the go tool. It could be faster, but
it's plenty fast enough.

The public won't use mk or make. If you want to succeed in the world,
you need to find a more modern way to build software. It's been clear
for a long time that that is not a relevant criterion for this
community any more, and although it makes me sad I have moved on.

I regret responding to this thread, and will move on there, too.

-rob
Kurt H Maier
2013-03-24 00:46:04 UTC
Permalink
Post by Rob Pike
The public won't use mk or make. If you want to succeed in the world,
Oh good, is this where we find out we've all been using the wrong
version of 'success'? Not everyone has your goals. Still.
Post by Rob Pike
I regret responding to this thread
Agreed.
Francisco J Ballesteros
2013-03-23 19:45:52 UTC
Permalink
might be, but I was also thinking of macos x, not just 9.
Post by Rob Pike
If go install is slow on Plan 9, it's because Plan 9's file system is
slow (which it is and always has been), and because go install does
transitive dependencies correctly, which mk does not.
-rob
dexen deVries
2013-03-25 09:43:44 UTC
Permalink
(...) and because go install does
transitive dependencies correctly, which mk does not.
anybody care to explain what is the limitation of mk here? can't wrap my head
around it...
--
dexen deVries

[[[↓][→]]]
Gorka Guardiola
2013-03-25 10:26:21 UTC
Permalink
Post by dexen deVries
anybody care to explain what is the limitation of mk here? can't wrap my head
around it...
It only knows about the rules you give it. It does not understand the real dependencies in your software.
Also, because of this you tend to give it general rules which are not always right.
There are more, but these are the ones Rob was referring to, I think.

(Just in case, I am not making a point for or against mk, just trying to answer his question)

G.
hiro
2013-03-25 10:33:55 UTC
Permalink
Post by Gorka Guardiola
It does not understand the real dependencies in your software.
what does "understand" mean in that context?
I would think if this is all done automagically with go it would need
to follow even more general rules, no?
Gorka Guardiola
2013-03-25 10:42:40 UTC
Permalink
mk doesn't parse '#include' directives in C and even if it did, it wouldn't help.
I think that's what he's referring to.
Yes.
Gorka Guardiola
2013-03-25 10:40:57 UTC
Permalink
Post by hiro
what does "understand" mean in that context?
I would think if this is all done automagically with go it would need
to follow even more general rules, no?
No, they are concrete and specialized for go (the language). Go (the tool)
knows about the different ways a go program can be compiled, how the imports work, etc., and deduces what to do from that.

The general rules in mk are more "every time you see a file ending like this, do that".

G.
Bence Fábián
2013-03-25 10:40:32 UTC
Permalink
mk doesn't parse '#include' directives in C and even if it did, it
wouldn't help.
I think that's what he's referring to.
Post by hiro
Post by Gorka Guardiola
It does not understand the real dependencies in your software.
what does "understand" mean in that context?
I would think if this is all done automagically with go it would need
to follow even more general rules, no?
dexen deVries
2013-03-25 11:00:43 UTC
Permalink
mk doesn't parse '#include' directives in C
gnu make can use the output of gcc -M as rules describing prerequisites. it's
somewhat tedious and error-prone, though, as indicated by the multitude of -Mx
<file> options.
--
dexen deVries

[[[↓][→]]]
dexen deVries
2013-03-25 10:43:36 UTC
Permalink
Post by hiro
Post by Gorka Guardiola
It does not understand the real dependencies in your software.
what does "understand" mean in that context?
I would think if this is all done automagically with go it would need
to follow even more general rules, no?
if mk understood 8c's construct ``#pragma lib "libbio.a"'' and used it to link
correct libraries, it could be said to understand the actual dependencies as
expressed by code.

of course, the deeper you go into this rabbit hole, the closer you get to
something resembling GNU autotools.
--
dexen deVries

[[[↓][→]]]


``we, the humanity'' is the greatest experiment we, the humanity, ever
undertook.
l***@proxima.alt.za
2013-03-25 11:02:22 UTC
Permalink
Post by dexen deVries
of course, the deeper you go into this rabbit hole, the closer you get to
something resembling GNU autotools.
The autotools attempt to be the answer to questions that stopped
being asked ten or twenty years ago. That baggage is what I presume
the Go developers have tried to eliminate in advance.

++L
Bence Fábián
2013-03-25 11:09:23 UTC
Permalink
What if you want to make PostScript from troff files?
Mk is a general tool and a good one. But it will never
have intricate knowledge about the files it works on.
Post by dexen deVries
if mk understood 8c's construct ``#pragma lib "libbio.a"'' and used it to link
correct libraries, it could be said to understand the actual dependencies as
expressed by code.
of course, the deeper you go into this rabbit hole, the closer you get to
something resembling GNU autotools.
erik quanstrom
2013-03-25 13:30:44 UTC
Permalink
Post by Bence Fábián
What if you want to make postscript from troff files?
Mk is a general tool and a good one. But it will never
have intricate knowledge about the files it works on.
mk does know how to expand an archive (see ar(6)) and
tell if the source is out of date wrt the archive without the
presence of the intermediate.

- erik
t***@polynum.com
2013-03-25 11:16:28 UTC
Permalink
Post by dexen deVries
if mk understood 8c's construct ``#pragma lib "libbio.a"'' and used it to link
correct libraries, it could be said to understand the actual dependencies as
expressed by code.
Except that there is a topological sorting to be done, since the
linking with libraries is order dependent. And one will need to
explicitly state that some lib depends on some others for the
linking (unless the symbols in the library are scanned to detect
unsatisfied dependencies and a search is done in standard directories
to find a library satisfying the dependency, etc.). From a superficial
analysis, there is no easy way to do what the Go tools are doing
(by setting rules and giving information in the packages to build
the dependencies), except for simple cases. (I wanted to compile kerTeX
on all OSes statically; when it came to a 2D interface for METAFONT, I
needed to link against whatever window libraries are there; and once you
tackle the task of deducing from a dynamically shared X11 library
the exact dependencies, so as to explicitly state all the static
versions to link against, you start seeing the mess...)
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Charles Forsyth
2013-03-25 11:55:19 UTC
Permalink
Post by t***@polynum.com
since the
linking with libraries is order dependent. And one will need to
explicitly state that some lib depends on some others for the
linking (except if the symbols in the library are scanned to detect
unsatisfied dependencies and a search is done in standard directories
to find a library satisfying the dependency etc.).
the loaders do that using pragma lib:
The order of search to resolve undefined symbols is to load
all files and libraries mentioned explicitly on the command
line, and then to resolve remaining symbols by searching in
topological order libraries mentioned in header files
included by files already loaded ...
t***@polynum.com
2013-03-25 12:10:04 UTC
Permalink
Post by Charles Forsyth
The order of search to resolve undefined symbols is to load
all files and libraries mentioned explicitly on the command
line, and then to resolve remaining symbols by searching in
topological order libraries mentioned in header files
included by files already loaded ...
Who said that nothing is learnt from reading the list?! Thanks: I didn't
know it was doing all that (it seems it does for static libraries what a
dynamic loader does for shared ones).
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
erik quanstrom
2013-03-25 13:35:34 UTC
Permalink
anybody care to explain what is the limitation of mk here? can't
wrap my head around it...
It only knows about the rules you give it. It does not understand the
real dependencies in your software. Also, because of this you tend to
give it general rules which are not always right. There are more, but
these are the ones Rob was referring to, I think.
interesting plan 9 examples:
1. too much. plan 9 libraries are always rebuilt in their entirety,
even though it's possible to only rebuild the updated files.
2. too little. plan 9 programs seldom depend on system header
files, or the c compiler.

neither is a big issue in practice. but that leads me to an interesting
question, does go rebuild everything if the go compiler has changed?

- erik
andrey mirtchovski
2013-03-25 14:38:07 UTC
Permalink
Post by erik quanstrom
neither is a big issue in practice. but that leads me to an interesting
question, does go rebuild everything if the go compiler has changed?
i think it stops at package "runtime". at least that's what it builds
first when I tell it to "rebuild everything".
Ori Bernstein
2013-03-25 17:30:53 UTC
Permalink
In C, source files do not depend on other source files; they merely depend on
headers, e.g.:

foo.$O: foo.c foo.h bar.h
bar.$O: bar.c bar.h baz.h
baz.$O: baz.c baz.h

So, because the dependencies exist and do not depend on the outputs of other
commands, it's possible to build the object files in any order. However, in
Go, the exported symbols (the equivalent of headers) are generated from source
and put into the libraries, meaning you end up with a dependency graph like this:

foo.$O: foo.go bar.$O
bar.$O: bar.go baz.$O
baz.$O: baz.go

'mk' can handle that if you explicitly give the list of object files that a
source file depends on, but it has no good way of probing it automatically
as a build progresses, making maintaining mkfiles tedious and error-prone.

Effectively, it means that every time you change an import in Go, you would
have to modify the mkfile. 'go build' is able to determine the build order
needed automatically without the need to maintain an out-of-line dependency
graph that can get out of date.

At least in my understanding.
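(A minimal sketch of how that graph lives in the source itself; the import path
example.org/bar is invented. The import line is the only dependency declaration:
go build reads it, compiles bar first, and uses bar's generated export data
where C would use a hand-written header.)

// bar/bar.go
package bar

// Greet is an exported symbol; its "header" is generated from this
// source when the package is compiled, not written by hand.
func Greet() string { return "hello from bar" }

// main.go
package main

import (
	"fmt"

	"example.org/bar" // go build derives the build order from lines like this
)

func main() {
	fmt.Println(bar.Greet())
}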

On Mon, 25 Mar 2013 10:43:44 +0100
Post by dexen deVries
(...) and because go install does
transitive dependencies correctly, which mk does not.
anybody care to explain what is the limitation of mk here? can't wrap my head
around it...
--
dexen deVries
[[[↓][→]]]
--
Ori Bernstein <***@eigenstate.org>
David Leimbach
2013-03-23 23:03:45 UTC
Permalink
Yup

Sent from my iPhone
Post by Rob Pike
It's pointless to complain about the size of "hello world". It's not a
real program. In Go's case it's larger than a C binary because the
libraries (and the presence of a runtime) are capable of much more
under the covers, but by the time you write a real program in Go
you'll find the ratio of Go binary to C binary isn't nearly so large;
the incremental cost to the binary of a Go source file compared to a C
source file is negligible.
A house is much heavier than a tent, but it also has a much stronger foundation.
-rob
andrey mirtchovski
2013-03-23 19:13:53 UTC
Permalink
this is not a new discussion, it started in november 2009. the fact
that it's just coming to 9fans is a sign of how far behind the times
we are :(

the go runtime is ~380k. that one you must always carry, even in an
empty program (see below). what you're complaining about is the
side-effect of importing fmt and other big packages such as os and
net. with fmt you're sucking in all the unicode code tables and the
reflection code used to process printing arguments (which you can't
prove will not be used). the initial jump is big, but as your code
grows the binary size tends to increase slower -- all of the imports
are already in. the biggest program from the Go distribution
frequently in use is "godoc". it deals with xml, json, binary
encoding, directory navigation, document preparation, source code
parsing, compression/decompression and serves the major website for go
-- golang.org... that program, statically linked, is 8 megs (64-bit).
i've seen things that deal with graphics which get to 20 megs. that is
reasonable, i think.

here's an illustrative progression of go binary sizes:

$ cat > t.go
package main
func main(){
}
$ go build t.go; ls -l t
-rwxr-xr-x 1 andrey wheel 384880 Mar 23 12:43 t
$ cat > t.go
package main
func main(){
println("hello")
}
$ go build t.go; ls -l t; ./t
-rwxr-xr-x 1 andrey wheel 389008 Mar 23 12:43 t
hello
$ cat > t.go
package main
import "unicode"
var _ = unicode.MaxRune
func main() {
}
$ go build t.go; ls -l t
-rwxr-xr-x 1 andrey wheel 581024 Mar 23 12:45 t
$ cat > t.go
package main
import "fmt"
var _ = fmt.Println
func main(){
}
$ go build t.go; ls -l t
-rwxr-xr-x 1 andrey wheel 1481920 Mar 23 12:44 t
ron minnich
2013-03-23 19:23:23 UTC
Permalink
I'll happily pay the price of bigger binaries for things such as the %v format.

I don't write hello, world that often, or even care about its size when I do.

One demo we used to do for Unix was to show we could write an executable
program that was 2 bytes. It was cute. Did it matter, in the end? Not
really. But we used to call 4k programs bloated.

I have a hard time worrying about 1M binaries on $200 machines with
12 GB/s memory bandwidth and 4G memory.
It's 2013.

ron
Gorka Guardiola
2013-03-23 19:56:30 UTC
Permalink
Post by ron minnich
I'll happily pay the price of bigger binaries for things such as the %v format.
I don't write hello, world that often, or even care about its size when I do.
Hello world was just an example, please don't make a straw man out of it.
If you want real programs that I (we) actually use and that will
be (much) bigger in go:

ls, cp, rm, mv, cat, acid; I can go on.

Small programs are useful and important.

There is a price to pay, and if you get something useful out of it, it may be
a fair price to pay. As I said in my other e-mail, I was just stating
a fact: binaries are bigger, and replacing the minimal set of commands of
the system, for example, can easily make the minimal system at least
5 times bigger.
Post by ron minnich
I have a hard time worrying about 1M binaries on $200 machines with
12 GB/s memory bandwidth and 4G memory.
It's 2013.
A lot of my friends have cheap phones that run out of memory
all the time. There is no one-size-fits-all in engineering; there
are compromises and uses.

Higher-level programming means paying the cost of bigger binaries;
that may be ok for some uses and not for others. I like writing go
code: it is fun, and has a fairly high level of abstraction while
letting you access the system easily (I am looking at you, Java).

As I said, at least from me, it was not a complaint, just a statement of fact.
I have spent quite some time lately getting go working on arm in Plan 9
because I value the language.

G.
andrey mirtchovski
2013-03-23 23:45:35 UTC
Permalink
Post by Gorka Guardiola
If you want real programs which are bigger that I (we) actually use that will
ls, cp rm mv cat acid, I can go on.
Small programs are useful and important.
here's a representative set. the programs are identical in behaviour
and arguments to the Plan 9 set. the size is as reported by du, in
kilobytes:

1456 ./date/date
1460 ./cat/cat
1564 ./cleanname/cleanname
1564 ./tee/tee
1736 ./echo/echo
1764 ./cp/cp
1772 ./uniq/uniq
1780 ./cmp/cmp
1780 ./freq/freq
1780 ./wc/wc
1792 ./comm/comm
Post by Gorka Guardiola
binaries are bigger and for example replacing the minimal sets of commands of
the system, this can make the
minimal system at least 5 times bigger easy.
if that was a real issue you were trying to solve there are things you
can do to help yourself. most notably sticking everything in a single
binary and invoking the right function based on your argv0. it took me
less than 15 minutes to convert the above code to work as a single
binary and most of that was in handling clashing flags (it would've
been a non-issue if I had used flagsets when writing the original
programs). size at the very end:

$ date > test.txt
$ ln -s $GOPATH/bin/all cat
$ ln -s $GOPATH/bin/all wc
$ ./cat test.txt
Sat Mar 23 17:32:42 MDT 2013
$ ./wc test.txt
1 6 29 test.txt
$ du -k $GOPATH/bin/all
1888 /Users/andrey/bin/all

the size of the original binaries on plan9 is 588k. what was a factor
of 30 is now a factor of 3. all tests still pass and it took less time
to complete than writing this email.
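(for anyone curious, a minimal sketch of the argv0 dispatch; the cat and wc
bodies here are simplified stand-ins, not the code measured above:)

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// cat copies each named file (or stdin, if no names are given) to stdout.
func cat(args []string) {
	if len(args) == 0 {
		io.Copy(os.Stdout, os.Stdin)
		return
	}
	for _, name := range args {
		f, err := os.Open(name)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		io.Copy(os.Stdout, f)
		f.Close()
	}
}

// wc prints a line count for each named file.
func wc(args []string) {
	for _, name := range args {
		f, err := os.Open(name)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		n := 0
		s := bufio.NewScanner(f)
		for s.Scan() {
			n++
		}
		f.Close()
		fmt.Println(n, name)
	}
}

func main() {
	// One binary, several commands: dispatch on the name the program
	// was invoked as (ln -s all cat; ln -s all wc; ...).
	switch filepath.Base(os.Args[0]) {
	case "cat":
		cat(os.Args[1:])
	case "wc":
		wc(os.Args[1:])
	default:
		fmt.Fprintln(os.Stderr, "unknown command:", os.Args[0])
		os.Exit(1)
	}
}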

there's an even better solution, but it won't work on plan9 because
the go tool is slow there :)
Bruce Ellis
2013-03-23 23:58:52 UTC
Permalink
I recall one guy at the labs(!) who would ruthlessly avoid printf because
it dragged in too much stuff. I think he ran out of people to argue with 30
years ago.
Post by andrey mirtchovski
Post by Gorka Guardiola
If you want real programs which are bigger that I (we) actually use that
will
Post by Gorka Guardiola
ls, cp rm mv cat acid, I can go on.
Small programs are useful and important.
here's a representative set. the programs are identical in behaviour
and arguments to the Plan 9 set. the size is as reported by du, in
1456 ./date/date
1460 ./cat/cat
1564 ./cleanname/cleanname
1564 ./tee/tee
1736 ./echo/echo
1764 ./cp/cp
1772 ./uniq/uniq
1780 ./cmp/cmp
1780 ./freq/freq
1780 ./wc/wc
1792 ./comm/comm
Post by Gorka Guardiola
binaries are bigger and for example replacing the minimal sets of
commands of
Post by Gorka Guardiola
the system, this can make the
minimal system at least 5 times bigger easy.
if that was a real issue you were trying to solve there are things you
can do to help yourself. most notably sticking everything in a single
binary and invoking the right function based on your argv0. it took me
less than 15 minutes to convert the above code to work as a single
binary and most of that was in handling clashing flags (it would've
been a non-issue if I had used flagsets when writing the original
$ date > test.txt
$ ln -s $GOPATH/bin/all cat
$ ln -s $GOPATH/bin/all wc
$ ./cat test.txt
Sat Mar 23 17:32:42 MDT 2013
$ ./wc test.txt
1 6 29 test.txt
$ du -k $GOPATH/bin/all
1888 /Users/andrey/bin/all
the size of the original binaries on plan9 is 588k. what was a factor
of 30 is now a factor of 3. all tests still pass and it took less time
to complete than writing this email.
there's an even better solution, but it won't work on plan9 because
the go tool is slow there :)
Francisco J Ballesteros
2013-03-24 08:49:20 UTC
Permalink
andrey, I agreed the language is nice
and that's why I also use it.
I just pointed out that binaries are one
order of magnitude larger, as
you just proved.

perhaps I shouldn't have raised this.
I didn't want to bother anyone.
Post by andrey mirtchovski
Post by Gorka Guardiola
If you want real programs which are bigger that I (we) actually use that will
ls, cp rm mv cat acid, I can go on.
Small programs are useful and important.
here's a representative set. the programs are identical in behaviour
and arguments to the Plan 9 set. the size is as reported by du, in
1456 ./date/date
1460 ./cat/cat
1564 ./cleanname/cleanname
1564 ./tee/tee
1736 ./echo/echo
1764 ./cp/cp
1772 ./uniq/uniq
1780 ./cmp/cmp
1780 ./freq/freq
1780 ./wc/wc
1792 ./comm/comm
Post by Gorka Guardiola
binaries are bigger and for example replacing the minimal sets of commands of
the system, this can make the
minimal system at least 5 times bigger easy.
if that was a real issue you were trying to solve there are things you
can do to help yourself. most notably sticking everything in a single
binary and invoking the right function based on your argv0. it took me
less than 15 minutes to convert the above code to work as a single
binary and most of that was in handling clashing flags (it would've
been a non-issue if I had used flagsets when writing the original
$ date > test.txt
$ ln -s $GOPATH/bin/all cat
$ ln -s $GOPATH/bin/all wc
$ ./cat test.txt
Sat Mar 23 17:32:42 MDT 2013
$ ./wc test.txt
1 6 29 test.txt
$ du -k $GOPATH/bin/all
1888 /Users/andrey/bin/all
the size of the original binaries on plan9 is 588k. what was a factor
of 30 is now a factor of 3. all tests still pass and it took less time
to complete than writing this email.
there's an even better solution, but it won't work on plan9 because
the go tool is slow there :)
Steve Simon
2013-03-24 09:02:44 UTC
Permalink
I am intrigued by go but I mostly write embedded code for a day job
and I believe go doesn't really cover that space well. The other
part of my job is image processing, which would be appropriate for Go,
but my employer has mandated C++ so that is the end of that.

I do have a few honest questions about the current state of Go.

Is there a standardised GUI binding for go, something cross-platform?

Is there any consensus on whether go could be used on bare metal, or is that
just unrealistic given garbage collection and the relatively large
runtime?

Has there been any discussion of how some of the runtime could be moved into
a go-specific OS - would that even be a good/interesting idea?

-Steve
l***@proxima.alt.za
2013-03-24 09:22:15 UTC
Permalink
All below is opinion, possibly uninformed.
Post by Steve Simon
Is there a standardised GUI binding for go, somthing cross-platform?
Not yet, although I think the pressure is building. At the moment,
from where I am, it seems that a lot of development relies on
interfacing with a web browser and there are a few specialised
packages that interface with the conventional Unix toolkits (GTK,
etc.). These last are not, in my opinion, cross-platform (enough).
Post by Steve Simon
Is there any concensus as go could be used on bare metal or is that
just un-realistic given garbage collection and the relatively large
runtime.
I have seen no sign of this, but here I think Rob is right: the
hardware is moving into the cross-hairs, rather than Go bending over to
it. The ARM A13 and A10 as well as the AVRs have shown that a big CPU
feature footprint does not have to mean a big physical footprint.
Maybe the contrary applies. The GPUs puzzle me in this context, but
I'm not sure how relevant they are.
Post by Steve Simon
has there been discussion on how some of the runtime could be moved into
a go-specific OS - would that even be a good/interesting idea?
It's something that struck me recently too. Go as it stands is not
ideal for operating system development and there has been discussion
of how the runtime and especially the garbage collector could be
redesigned for bare metal, but I think the demand needs to become more
shrill before the design can become more than just a concept.

++L
Nicolas Bercher
2013-03-25 14:45:56 UTC
Permalink
Post by ron minnich
I'll happily pay the price of bigger binaries for things such as the %v format.
I don't write hello, world that often, or even care about its size when I do.
One demo we used to do for Unix was show we could write an executable
program that was 2 bytes. It was cute. Did it matter, in the end? Not
really. But we used to call 4k programs bloated.
I have a hard time worrying about 1M binaries on $200 machines with
12 GB/s memory bandwidth and 4G memory.
It's 2013.
Even if I deeply love small and beautiful code, the only thing I really
care about in everyday life is computer responsiveness. And I find myself
too often unhappy with the responsiveness of today's computers.
And maybe it's only a matter of code design quality. One may write
really good and efficient code that will consume tens of KBytes when
compiled. So to me code size is not that big a deal as long as
the result works pretty well.

Nicolas
hiro
2013-03-23 19:29:21 UTC
Permalink
Post by erik quanstrom
incremental improvement often fails.
why does it fail? I don't see why this has to be a rule.

a frequently annoying counterexample is when they yet again reinvent
the wheel, include a new "compatible" implementation of all the old
features and some new features, all based on some better design - and
most of the old bugs are gone, lots of things just work, lots of new
stuff even - but lots of stuff that used to work is now bugged also.
Gorka Guardiola
2013-03-23 17:47:05 UTC
Permalink
Post by Rob Pike
Much of which is symbols. Plus, a simple computer has gigs of memory.
Yes, it's remarkable how much bigger programs are now than they were
20 years ago, but 20 years ago the same things were being said. I
understand your objection - I really do - but it's time to face the
future. The smart phone in your pocket is roughly 100 times faster
than the machine Plan 9 was developed on and has 1000 times the RAM.
I was merely stating facts, not criticising, as I am still learning about the go
implementation, and I am sure there are good reasons why all this is true.
In any case, the argument that there is more memory now is not a very good one
unless you use it for something useful. The phone is very powerful, but
it also runs on a battery and has to do many things on a budget, so that
is not really an argument. Saying what you need/use the giant tables for
may be.


- With greater power comes greater need for restraint.

G.
Kurt H Maier
2013-03-24 00:41:59 UTC
Permalink
Post by Rob Pike
Much of which is symbols. Plus, a simple computer has gigs of memory.
Yes, it's remarkable how much bigger programs are now than they were
20 years ago, but 20 years ago the same things were being said. I
understand your objection - I really do - but it's time to face the
future. The smart phone in your pocket is roughly 100 times faster
than the machine Plan 9 was developed on and has 1000 times the RAM.
Computers are incredibly powerful now, and the technologies of today
can use that power well (as I claim Go does) or poorly (as some others
do), or ignore it at the risk of obsolescence.
-rob
Ah, yes, the old "put more ram in your macbook" argument. Some guy
bought an iphone, so we should immediately throw away any computers that
are not at least iphones. This saves valuable programmer time, because
they don't have to optimize things and can instead do important work like
providing sixteen SQL libraries.

To the future!

khm
t***@polynum.com
2013-03-24 09:48:50 UTC
Permalink
To go back to the original subject: since gcc(1) has taken the C++ path,
I will be more than happy to see an increase in Go programs since, thanks to
the work of some people, Go for Plan9 is possible.

As for the Go language, it is sufficiently near C, with extensions that
feel natural, to be interesting and easy to grasp for a C programmer
(and the Go tour makes it easy to start).

But it is the same as with some parts of mathematics: I was not really
interested in them because they were not in the neighborhood of
what I was interested in at the moment. I finally got interested
in these parts later, when my wandering suddenly arrived in these
neglected parts, reminding me of "something" (and then I saw why it
was interesting). So Go is stored in a part of my mind, perhaps
simply waiting for a problem it will be the tool to solve.

But since I'm one of the few who use literate programming (cweb), I
would probably start by writing a goweb instead of using the dedicated
tools...
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
t***@polynum.com
2013-03-24 12:03:09 UTC
Permalink
Post by t***@polynum.com
But since I'm one of the few who use literate programming (cweb), I
would probably start by writing a goweb instead of using the dedicated
tools...
https://bitbucket.org/santucco/goweb
https://groups.google.com/d/topic/golang-nuts/6UlkxEB49Rc
Thanks for the info!
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Dustin Fechner
2013-03-24 11:56:54 UTC
Permalink
Post by t***@polynum.com
But since I'm one of the few who use literate programming (cweb), I
would probably start by writing a goweb instead of using the dedicated
tools...
https://bitbucket.org/santucco/goweb

The original thread from golang-nuts:
https://groups.google.com/d/topic/golang-nuts/6UlkxEB49Rc
Gorka Guardiola
2013-03-23 16:20:41 UTC
Permalink
Post by erik quanstrom
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
sorry to be dense. larger than what?
C
l***@proxima.alt.za
2013-03-23 16:37:59 UTC
Permalink
Post by erik quanstrom
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
sorry to be dense. larger than what?
My guess: "larger than they need to be", because the Go linker does not
drop unused library modules. Nemo may mean something else, of course.

++L
Francisco J Ballesteros
2013-03-23 16:06:41 UTC
Permalink
go runs already on 9.

Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….

but it's already there.
Post by Peter A. Cejchan
I still hope that some clone of plan9/nix/nxm will merge with Go
Peter A. Cejchan
2013-03-25 05:48:44 UTC
Permalink
Yes, I run Go on native Plan9.
What I wanted to say is that I would be happy to see some minimalistic OS (in
the spirit of plan9) written in, and integrated with, Go...
[Pardon my English; as usual, I am not a native speaker]

And yes, binaries are extraordinarily huge (no idea why). However, I still
like Go better than C, even though I _truly loved C_ since I got my first C
compiler.

++pac
Post by Francisco J Ballesteros
go runs already on 9.
Binaries are one order of magnitude larger and the go tool & part of the
runtime code are, well….
but it's already there.
I still hope that some clone of plan9/nix/nxm will merge with Go
l***@proxima.alt.za
2013-03-25 06:27:33 UTC
Permalink
Post by Peter A. Cejchan
Yes, I run Go on native Plan9,
Go breaks away from a number of traditions that have long become
obsolete and that is its main merit. The price is not only in having
to adjust to the change, but also in some sacred cows being
slaughtered in the process.

But Go also opens the door to better ways of doing things. The build
system, raw as it still is, is streets ahead of any conventional build
system, but it is tightly coupled to the language. Portability across
platforms is much easier, in the Plan 9 tradition, but requires a set
of build tools ([568][ac]) that users are not familiar with and [568]l
becomes the new bottleneck, to many users' surprise.
Cross-development - my favourite feature - becomes much easier, but I
am having a great deal of trouble getting my head around all the
complications it brings with it.

Philosophically, Plan 9 has rattled the proverbial cage and Go is an
earthquake by comparison. The outcome is still to be evaluated. But
not everyone is going to see it in the same way.

Of relevance here is that if Rob and Russ and Ken had let
considerations such as pampering slow hardware guide them, we'd have a
different language and many features would not be available. At the same time,
the need for a slim version of Go will grow with acceptance of the fat
model and then people like Kurt may be inspired to restore in the
linker the ability to trim libraries of unused modules (don't hold
your breath!).

If the Go developers had started from the other end, as I would have
been tempted to do, the outcome would definitely look nothing like
what we have.

The nice bit is that there are enough people out there to consider
such options and some of them are actually willing to publish their
efforts.

The people who insist that ONE tool should encompass all these
options are those who are too unproductive to do it themselves and
who fail to see that no-one owes them anything.

In my other life managing a backpackers, I see way too many young
people who seem to think that our generation somehow owe them
something they are able but not willing to seek for themselves. I
could tell you where most of them seem to come from, but I'm sure that
would be unfair to all those they leave behind while spending money
they did not earn to travel in comfort around the world.

++L

PS: Gorka is making amazing progress with the plan9/arm port and the
reason I know is that I've just tested his latest efforts on the
Sheevaplug and the present obstacle does not seem insurmountable -
but it is very real, so "it's not working yet". Watch golang-dev
on Google Groups for updates.
Kurt H Maier
2013-03-25 16:45:32 UTC
Permalink
I see once again it's a matter of tone.
Post by l***@proxima.alt.za
Philosophically, Plan 9 has rattled the proverbial cage and Go is an
earthquake by comparison. The outcome is still to be evaluated. But
not everyone is going to see it in the same way.
Absolutely correct. In the sector I work in, it will be years and years
before any of Go's benefits trickle up; its mode of parallelism does not
apply well to my systems. In the meantime, it does nobody any good to
pre-emptively stifle someone's investigations by posting terse messages
about how irrelevant it is and how the current decisions are the right
ones -- at least without providing any kind of technical insight inside
the message.
Post by l***@proxima.alt.za
Of relevance here is that if Rob and Russ and Ken had let
considerations such as pampering slow hardware guide them, we'd have a
different language and many features would not be available. At the same time,
the need for a slim version of Go will grow with acceptance of the fat
model and then people like Kurt may be inspired to restore in the
linker the ability to trim libraries of unused modules (don't hold
your breath!).
Indeed. One shouldn't hold one's breath on this topic, because Kurt's not in
the business of repairing software. In fact, Kurt's business doesn't
overlap with Plan 9 or Go in the slightest -- he's a Plan 9 hobbyist who
has assessed that Go solves very specific problems which don't affect
him. That's why he's subscribed to 9fans and not golang-nuts.
Post by l***@proxima.alt.za
The people who insist that ONE tool should encompass all these
options are those who are too unproductive to do it themselves and
who fail to see that no-one owes them anything.
In my other life managing a backpackers, I see way too many young
people who seem to think that our generation somehow owe them
something they are able but not willing to seek for themselves. I
could tell you where most of them seem to come from, but I'm sure that
would be unfair to all those they leave behind while spending money
they did not earn to travel in comfort around the world.
Yes, let's talk about entitlements. There are people here who seem to
think that one tool should encompass all options, and that that tool is
the Labs distribution. All of this acrimony needs to go away.
Different people have different goals, and it's not the job of the 9fans
hegemony to ascribe value to these goals. I personally don't care if
someone wants to get gcc 4.8 built on Plan 9, for exactly the same
reason I don't care about the size of the binaries Go produces. I don't
need or want either tool, but when people -- especially people in
current or former leadership positions on these projects -- show up just
to slag articles of curiosity for no particular reason, I'm going to
make fun of them.

I know that the population of 9fans contains a sizeable percentage of
people who would like to cast Plan 9 in amber, to hold it immutable as
an example to future generations. That's an unrealistic expectation.

khm
l***@proxima.alt.za
2013-03-25 19:04:59 UTC
Permalink
Post by Kurt H Maier
I know that the population of 9fans contains a sizeable percentage of
people who would like to cast Plan 9 in amber, to hold it immutable as
an example to future generations. That's an unrealistic expectation.
Maybe, but maybe that's the best we can do, given that the conditions
that gave rise to Plan 9 have long ceased to exist and are unlikely to
recur. A version of Plan 9 untainted by the predominance of the Intel
and Windows philosophies is needed to remind us of how things could
have turned out.

But it is all much more complex than a mailing list could possibly
track. Since the 1970s, it is clear that technology has taken a
number of quantum leaps that could have landed in different places and
all the missed opportunities, good and bad, need to be recorded as
well as explored because we sure as hell are not in the best of all
possible futures, right now. Unless you're some virtual entity such
as Microsoft, Intel, Google or Facebook and could not care less about
the fate of individuals.

Oops, that's probably a bit rambly, but I think most observers will
understand.

++L
s***@9front.org
2013-03-25 19:19:16 UTC
Permalink
Post by l***@proxima.alt.za
Maybe, but maybe that's the best we can do, given that the conditions
that gave rise to Plan 9 have long ceased to exist and are unlikely to
recur.
And maybe we're missing the point but there are still
a few of us out there using Plan 9 daily as our primary
interface to computers. For some of us, for the tasks we
are performing, we just don't need all this modern crap
that seems to have broken everyone else's heart.

-sl
aram
2013-03-25 19:31:27 UTC
Permalink
Post by s***@9front.org
Post by l***@proxima.alt.za
Maybe, but maybe that's the best we can do, given that the conditions
that gave rise to Plan 9 have long ceased to exist and are unlikely to
recur.
And maybe we're missing the point but there are still
a few of us out there using Plan 9 daily as our primary
interface to computers. For some of us, for the tasks we
are performing, we just don't need all this modern crap
that seems to have broken everyone else's heart.
-sl
And how do you manage to browse the web?

Just to do that I'm forced to use software bigger and more complex
than plan9 itself.

Sincerely,
Aram
--
http://thewayofthecode.posterous.com/
erik quanstrom
2013-03-25 19:38:26 UTC
Permalink
Post by l***@proxima.alt.za
Maybe, but maybe that's the best we can do, given that the conditions
that gave rise to Plan 9 have long ceased to exist and are unlikely to
recur. A version of Plan 9 untainted by the predominance of the Intel
and Windows philosophies is needed to reminds us of how things could
have turned out.
what conditions do you feel gave rise to plan 9 that no longer exist?

- erik
l***@proxima.alt.za
2013-03-25 19:46:35 UTC
Permalink
Post by erik quanstrom
what conditions do you feel gave rise to plan 9 that no longer exist?
Modem speeds below 19200bps? Reality engines costing as much as
houses and considerably less accessible? Skill sets in IT
practitioners way above the norm? Everything that the desktop PC
eventually brought to an end, I suppose.

In a word, elitism, largely earned rather than inherited. You did
ask!

++L
s***@9front.org
2013-03-25 19:51:48 UTC
Permalink
Post by erik quanstrom
what conditions do you feel gave rise to plan 9 that no longer exist?
I think there is a feeling that Plan 9 was created to address
specific problems (refraining from turning easy jobs into hard
jobs, translated as getting real work done on slow hardware) that
have been overtaken by history and the observed progression of
technology.

-sl

hiro
2013-03-23 17:08:40 UTC
Permalink
Post by Peter A. Cejchan
@Lucio: I still hope that some clone of plan9/nix/nxm will merge with Go
... just my dream, and I am just an embryo of a programmer
(as multiply stated here and elsewhere) so take it easy.... however, I'm
moving all my old stuff (and creating new one) to Go
[unfortunately, I am afraid I will never see the 9GoNix OS ;-) brought into
life]
do I need 3d glasses to read your text? words don't really stick out
with all these random parentheses, strange punctuation and
double-spaces.
Peter A. Cejchan
2013-03-25 05:53:00 UTC
Permalink
Sorry for that. I am not a native speaker. English uses different
punctuation than my mother tongue.
However, I hope you got what I wanted to say.

++pac
Post by hiro
Post by Peter A. Cejchan
@Lucio: I still hope that some clone of plan9/nix/nxm will merge with Go
... just my dream, and I am just an embryo of a programmer
(as multiply stated here and elsewhere) so take it easy.... however, I'm
moving all my old stuff (and creating new one) to Go
[unfortunately, I am afraid I will never see the 9GoNix OS ;-) brought
into
Post by Peter A. Cejchan
life]
do I need 3d glasses to read your text? words don't really stick out
with all these random parentheses, strange punctuation and
double-spaces.
Peter A. Cejchan
2013-03-23 09:54:14 UTC
Permalink
So, you perceive it, too.... unfortunately, then there will be no more
computers, even electric power.... nomads don't need it, and won't care :-(
++pac
IMHO, with the advent of a crisis compared to which 1929 will be a
minor storm, there will be a general disgust and lack of trust and a
return for crucial things to small is beautiful (and safer).
t***@polynum.com
2013-03-23 10:10:03 UTC
Permalink
Post by Peter A. Cejchan
So, you perceive it, too.... unfortunately, then there will be no more
computers, even electric power.... nomads don't need it, and won't care :-(
++pac
IMHO, with the advent of a crisis compared to which 1929 will be a
minor storm, there will be a general disgust and lack of trust and a
return for crucial things to small is beautiful (and safer).
I guess you are in Europe too? The only ones to refuse to see it can be
described by François Mauriac's sentence: "The ostrich hides in the sand
its little brainless head, trying to persuade itself that its feathered
rear is offending nobody's eyes."
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Peter A. Cejchan
2013-03-23 10:15:16 UTC
Permalink
Yep...
Post by t***@polynum.com
Post by Peter A. Cejchan
So, you perceive it, too.... unfortunately, then there will be no more
computers, even electric power.... nomads don't need it, and won't care
:-(
Post by Peter A. Cejchan
++pac
IMHO, with the advent of a crisis compared to which 1929 will be a
minor storm, there will be a general disgust and lack of trust and a
return for crucial things to small is beautiful (and safer).
I guess you are in Europe too? The only ones to refuse to see it can be
described by François Mauriac's sentence: "The ostrich hides in the sand
its little brainless head, trying to persuade itself that its feathered
rear is offending nobody's eyes."
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
Bakul Shah
2013-03-23 12:17:12 UTC
Permalink
It has long been the case that gcc can only be compiled with gcc. Switching its implementation language to C++ doesn't make the porting problem any worse.

The other "industrial strength" open source C/C++ compiler, clang/llvm, is also written in C++.

They can both be built on Windows, so it would certainly be possible to port them to plan9 if there were real demand.
l***@proxima.alt.za
2013-03-23 12:48:17 UTC
Permalink
Post by Bakul Shah
It has long been the case that gcc can only be compiled with gcc. Switching its implementation language to C++ doesn't make the porting problem any worse.
The other "industrial strength" open source C/C++ compiler, clang/llvm, is also written in C++.
They can both be built on Windows, so it would certainly be possible to port them to plan9 if there were real demand.
+1

++L
Winston Kodogo
2013-03-25 01:00:11 UTC
Permalink
"To go back to the original subject"

Surely this is the first time that has ever been done on 9fans?

This is 9fans, not 'Nam. There are rules.
Dan Cross
2013-03-25 01:20:04 UTC
Permalink
Eh, not so much anymore. The morlocks have taken over, which is a shame:
9fans used to be one of the best technical mailing lists on the Internet.
Those days are long gone. The ankle biters are too numerous now.

(Cue requisite flames.)

- Dan C.
Post by Winston Kodogo
"To go back to the original subject"
Surely this is the first time that has ever been done on 9fans?
This is 9fans, not 'Nam. There are rules.
Lyndon Nerenberg
2013-03-25 01:27:53 UTC
Permalink
"Trolling is a art" they tell themselves.
On Slashdot.
andrey mirtchovski
2013-03-25 01:24:42 UTC
Permalink
Post by Dan Cross
The ankle biters are too numerous now.
"Trolling is a art" they tell themselves.
Kurt H Maier
2013-03-25 01:36:54 UTC
Permalink
Post by Dan Cross
9fans used to be one of the best technical mailing lists on the Internet.
Those days are long gone. The ankle biters are too numerous now.
(Cue requisite flames.)
- Dan C.
I agree. It's horrible that you can't seem to have any sort of
technical discussion these days without some guy butting in and telling
everyone to shut up because computers are so crazy fast that nothing
even matters, or that nobody else cares about the technology involved,
etc. It's a shame.

khm
Dan Cross
2013-03-25 01:42:09 UTC
Permalink
Yeah. Or someone who is arguably the biggest problem on the list adding
absolutely nothing other than some uninformed, dogmatically driven, rigid
meta-commentary. Maybe that's all that person can do. He should keep
feeling smug while turning the crank, though: he obviously knows more than
the guy who designed and wrote most of it.
Post by Kurt H Maier
Post by Dan Cross
9fans used to be one of the best technical mailing lists on the Internet.
Those days are long gone. The ankle biters are too numerous now.
(Cue requisite flames.)
- Dan C.
I agree. It's horrible that you can't seem to have any sort of
technical discussion these days without some guy butting in and telling
everyone to shut up because computers are so crazy fast that nothing
even matters, or that nobody else cares about the technology involved,
etc. It's a shame.
khm
Jacob Todd
2013-03-25 01:45:07 UTC
Permalink
Stop.
andrey mirtchovski
2013-03-25 01:47:25 UTC
Permalink
Stop.
Collaborate and listen.
Kurt H Maier
2013-03-25 01:54:52 UTC
Permalink
Post by Dan Cross
Yeah. Or someone who is arguably the biggest problem on the list adding
absolutely nothing other than some uninformed, dogmatically driven, rigid
meta-commentary. Maybe that's all that person can do. He should keep
feeling smug while turning the crank, though: he obviously knows more than
the guy who designed and wrote most of it.
...wait. Are we talking about you? Because I was talking about Rob
Pike.

khm
Dan Cross
2013-03-25 02:02:12 UTC
Permalink
Post by Kurt H Maier
Post by Dan Cross
Yeah. Or someone who is arguably the biggest problem on the list adding
absolutely nothing other than some uninformed, dogmatically driven, rigid
meta-commentary. Maybe that's all that person can do. He should keep
feeling smug while turning the crank, though: he obviously knows more
than
Post by Dan Cross
the guy who designed and wrote most of it.
...wait. Are we talking about you? Because I was talking about Rob
Pike.
When you have produced a fraction of a percent of what Rob Pike has
produced over his career, I might take you seriously. Until then:
http://en.wikipedia.org/wiki/Dunning–Kruger_effect

And I'm out.

- Dan C.
Kurt H Maier
2013-03-25 02:08:02 UTC
Permalink
Post by Dan Cross
When you have produced a fraction of a percent of what Rob Pike has
http://en.wikipedia.org/wiki/Dunning–Kruger_effect
If, over the course of my life, I produce a fraction of a percent of
what Rob Pike has produced, it will be too much. But let me assure you
that I am chilled to the quick that Dan Cross* does not take me
seriously on the internet.
Post by Dan Cross
And I'm out.
Thanks for letting us know.

khm



* who?
s***@9front.org
2013-03-25 19:35:45 UTC
Permalink
And how do you manage to browse the web?
9front did some work on mothra. For trivial javascript
I try charon (which was sufficient to configure my wifi
router). As a last resort I VNC to another operating
system.

For the vast majority of what I do, mothra is sufficient.

-sl