Discussion:
[9fans] GNU Make
boyd, rounin
2004-06-01 16:41:38 UTC
Permalink
and what was wrong with mk?
ron minnich
2004-06-01 21:03:51 UTC
Permalink
Post by boyd, rounin
and what was wrong with mk?
well with gnu make you can imagine building gcc.

I am wondering if you couldn't build gcc 0.1 or whatever, and once that
was working, iterate up to later versions by building on the first
version. My memory from 0.1 was that it was pretty portable.

ron
boyd, rounin
2004-06-01 21:14:00 UTC
Permalink
Post by ron minnich
I am wondering if you couldn't build gcc 0.1 or whatever, and once that
was working, iterate up ...
yeah, i was thinking of stepwise iteration.
Russ Cox
2004-06-01 21:43:49 UTC
Permalink
Post by ron minnich
I am wondering if you couldn't build gcc 0.1 or whatever, and once that
was working, iterate up to later versions by building on the first
version. My memory from 0.1 was that it was pretty portable.
why bother? gcc3 is already ported.
ron minnich
2004-06-01 21:49:31 UTC
Permalink
Post by Russ Cox
Post by ron minnich
I am wondering if you couldn't build gcc 0.1 or whatever, and once that
was working, iterate up to later versions by building on the first
version. My memory from 0.1 was that it was pretty portable.
why bother? gcc3 is already ported.
because the last word I had was that it was very painful for the person
who did the port, said person is (sadly) no longer with us, and the work
could not be repeated by others. The last item is the one of most concern.

ron
Russ Cox
2004-06-01 22:03:17 UTC
Permalink
Post by ron minnich
because the last word I had was that it was very painful for the person
who did the port, said person is (sadly) no longer with us, and the work
could not be repeated by others. The last item is the one of most concern.
surely if it were time to repeat the work,
starting with gcc3 would be a lot less
painful than starting with gcc 0.1.
boyd, rounin
2004-06-01 22:12:40 UTC
Permalink
one word: native
Charles Forsyth
2004-06-02 07:39:48 UTC
Permalink
all Plan 9 software to use a library of error messages that also
includes a numeric code?
Numeric codes are a bad idea, that plan 9 was well rid of from unix.
they do not scale well in a distributed system with distributed development.

even the internet protocols that use them tend to degenerate into `good, bad, ugly'
based on the first digit (0, 4, 5 say).

it's just one reason that NFS had terrible trouble across systems with
different errno values. precise diagnostics ended up being mapped into EIO,
which wasn't always the right answer, just because there was no portable
way to convert an arbitrary code from one system to an arbitrary code in another.
n*m indeed. of course, you could have MegaErrnoInc act as a global
registry of mappings, as Sun tried with Sun RPC, but for this application it doesn't
work well.

some uniformity in the error strings would be desirable,
but numeric codes should not be used.
John Murdie
2004-06-02 09:12:50 UTC
Permalink
Post by Charles Forsyth
all Plan 9 software to use a library of error messages that also
includes a numeric code?
Numeric codes are a bad idea, that plan 9 was well rid of from unix.
they do not scale well in a distributed system with distributed development.
even the internet protocols that use them tend to degenerate into `good, bad, ugly'
based on the first digit (0, 4, 5 say).
it's just one reason that NFS had terrible trouble across systems with
different errno values. precise diagnostics ended up being mapped into EIO,
which wasn't always the right answer, just because there was no portable
way to convert an arbitrary code from one system to an arbitrary code in another.
n*m indeed. of course, you could have MegaErrnoInc act as a global
registry of mappings, as Sun tried with Sun RPC, but for this application it doesn't
work well.
some uniformity in the error strings would be desirable,
but numeric codes should not be used.
I, too, dislike numeric error codes, but how do you think multiple
natural languages should be accommodated? I've no experience of this
with Plan 9. Does it cope well?

John A. Murdie
Department of Computer Science
University of York
Charles Forsyth
2004-06-02 09:43:43 UTC
Permalink
but how do you think natural languages should be accommodated?
that's a reasonable question.

as it happens, the %r messages are typically the least of your worries; indeed, if
you've got real end-users (ie, non-programmers), they are
often best regarded as internal error diagnostics for implementors,
and left as-is so that those implementors will recognise them.
it's hard enough debugging programs as it is, without the Chinese Whispers effect
compounded by language changes within.

for user interfaces we've found historically that the use of a hashed map string -> string works well
(it's rather more than that, because there can be pre- and post-processing,
but that gives the idea), and any or all %r messages that need to escape
can go through that. that overall approach worked well on a big commercial
system that supported all the major Western European languages, including dialects.
it was much easier to manage than the contemporary msgcat scheme (i think that was it).
you need extra things to do sorting, and comparison, and more; and perhaps most
of all you need good translators. the scheme allowed annotations for strings, so that
one could distinguish `the times' from `The Times', the latter not to be translated, or
depending on context(!) to be translated to some local equivalent (eg, `USA Today'),
although as that last example shows it's often quite a loose translation.
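a minimal sketch of the hashed string -> string map described above. the names, the toy hash, and the table entries are illustrative assumptions, not the actual product's code; a real catalogue would be loaded per language, with the pre- and post-processing mentioned.

```c
#include <string.h>

enum { NHASH = 64 };

typedef struct Msg Msg;
struct Msg {
	char *key;	/* message text as written in the program */
	char *xlat;	/* translated text */
	Msg *next;
};

static Msg *tab[NHASH];

static unsigned
hash(char *s)
{
	unsigned h = 0;
	while(*s)
		h = h*31 + (unsigned char)*s++;
	return h % NHASH;
}

void
addmsg(Msg *m)
{
	unsigned h = hash(m->key);
	m->next = tab[h];
	tab[h] = m;
}

/* look up a message; fall back to the original text on a miss,
   so untranslated diagnostics pass through unchanged */
char*
xlat(char *s)
{
	Msg *m;
	for(m = tab[hash(s)]; m != NULL; m = m->next)
		if(strcmp(m->key, s) == 0)
			return m->xlat;
	return s;
}
```

on a miss the original text passes through unchanged, which fits the advice that internal diagnostics be left as-is for implementors to recognise.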
ron minnich
2004-06-02 16:12:51 UTC
Permalink
Post by Charles Forsyth
Numeric codes are a bad idea, that plan 9 was well rid of from unix.
they do not scale well in a distributed system with distributed development.
I agree with everything you're saying. But if you're going to
interoperate with Unix systems, there aren't a lot of options.

After all, strcmp() on a table of error strings is hardly an improvement
over numeric codes.

ron
boyd, rounin
2004-06-02 21:34:24 UTC
Permalink
Post by Charles Forsyth
Numeric codes are a bad idea, that plan 9 was well rid of from unix.
they do not scale well in a distributed system with distributed development.
agreed.
Richard Miller
2004-06-02 08:55:04 UTC
Permalink
Would it be out of the question to enhance
all Plan 9 software to use a library of error messages that also
includes a numeric code?
You mean like
IEF450I JOBXXX STEPA - ABEND=S0C4 REASON=00000010

Yes, those were the good old days.
Charles Forsyth
2004-06-02 09:57:38 UTC
Permalink
and gives no pointers to additional information? At least with the
IBM manuals, it was possible to find out where the above originated
and to establish an approximation to the cause.
Messages and Codes was notoriously unhelpful in many cases.
often the diagnosis and suggested resolution was infuriatingly
similar to the answer to the joke
``Doctor! It hurts when I do this.''.
ron minnich
2004-06-02 14:00:24 UTC
Permalink
I can only presume that dhog was under pressure to build GCC (how could
he not be?) and that it was simpler to bootstrap it using GCC itself.
As far as I understand, both GCC and the binutils are fairly portable;
there ought to be a way to port GCC using the native compiler in its APE
impersonation.
yes, but I was hoping the iterative process from 0.x would get us native,
i.e. skip APE. In the early days gcc was very portable, and would compile
under just about any C compiler. Nowadays, what with all the lovely
extras, I'm not sure it compiles well under non-gcc-compatible compilers.
Lastly, APE attempts to compute errno by searching a list of error
strings to find a match. Would it be out of the question to enhance
all Plan 9 software to use a library of error messages that also
includes a numeric code?
I did this on v9fs in the early days: error messages in my 9p looked
like this:
xxxx You screwed up

xxxx was the error code. You screwed up was an error message. On systems
without errstr, you returned xxxx as a number, otherwise, you return the
message. So you had an errno and errstr compatible TERRROR message at the
protocol level.
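a rough sketch of parsing that dual-form error. the wire format ("<code> <text>") is as described; the type and function names are illustrative, not v9fs's actual code.

```c
#include <stdlib.h>
#include <string.h>

typedef struct Err Err;
struct Err {
	int   num;	/* numeric code for errno-style clients */
	char *msg;	/* human-readable text for errstr-style clients */
};

/* split "101 You screwed up" into num=101, msg="You screwed up";
   return -1 if there is no leading numeric code */
int
parseerr(char *s, Err *e)
{
	char *p;
	long n;

	n = strtol(s, &p, 10);
	if(p == s || *p != ' ')
		return -1;
	e->num = n;
	e->msg = p+1;
	return 0;
}
```

an errstr-capable client keeps e->msg; an errno-only client takes e->num and discards the text.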

I proposed this a while back to some folks but got a negative response.

I still find that doing strcmp() on a set of error strings to produce
errno is both gross and non-portable due to subtle variations in error
strings between different Unixes.


ron
C H Forsyth
2004-06-02 14:31:13 UTC
Permalink
Post by ron minnich
I still find that doing strcmp() on a set of error strings to produce
errno is both gross and non-portable due to subtle variations in error
strings between different Unixes.
i thought ape was doing strcmp on a set of Plan 9 error strings to
produce unix errno.

the errno values aren't portable either. i suppose you could use
ENAME but you'd still need a table, and the set of names is still open-ended.
with errno, you can't just index into a table
to remap things because even the range isn't predictable.
i've seen variations even within one manufacturer.
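the kind of table both sides are describing might look like this. the entries are invented for illustration, not APE's actual list; a substring match stands in for whatever matching rule the real code uses.

```c
#include <errno.h>
#include <string.h>

typedef struct Errmap Errmap;
struct Errmap {
	char *str;	/* Plan 9 error string fragment */
	int   num;	/* corresponding unix errno */
};

static Errmap errmap[] = {
	{ "file does not exist", ENOENT },
	{ "permission denied",   EACCES },
	{ "interrupted",         EINTR },
};

/* search the table for a matching fragment and produce a unix errno */
int
strtoerrno(char *s)
{
	int i;

	for(i = 0; i < (int)(sizeof errmap / sizeof errmap[0]); i++)
		if(strstr(s, errmap[i].str) != NULL)
			return errmap[i].num;
	return EIO;	/* the unhelpful catch-all, as with NFS */
}
```

any string the table doesn't know falls into EIO, which is exactly the loss of precision complained about earlier in the thread.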

i think it might, however, be useful to have more predictable strings
(is it `directory entry not found' or `file not found')
Charles Forsyth
2004-06-02 14:36:56 UTC
Permalink
Post by C H Forsyth
(is it `directory entry not found' or `file not found')
of course i meant: or `file does not exist' ...
Charles Forsyth
2004-06-02 15:28:03 UTC
Permalink
Post by ron minnich
yes, but I was hoping the iterative process from 0.x would get us native,
i.e. skip APE. In the early days gcc was very portable, and would compile
under just about any C compiler. Nowadays, what with all the lovely
extras, I'm not sure it compiles well under non-gcc-compatible compilers.
it seems to assume sys/types.h and (funnily enough) errno.h, amongst others,
so it's mildly APE; many of the others are included only if many things are #defined
so might not matter. the assumed library is essentially APE.

system.h (644 lines) has many things like this:
/* 1 if we have C99 designated initializers. */
#if !defined(HAVE_DESIGNATED_INITIALIZERS)
#define HAVE_DESIGNATED_INITIALIZERS \
((GCC_VERSION >= 2007) || (__STDC_VERSION__ >= 199901L))
#endif

/* 1 if we have _Bool. */
#ifndef HAVE__BOOL
# define HAVE__BOOL \
((GCC_VERSION >= 3000) || (__STDC_VERSION__ >= 199901L))
#endif

/* Test if something is a socket. */
#ifndef S_ISSOCK
# ifdef S_IFSOCK
# define S_ISSOCK(m) (((m) & S_IFMT) == S_IFSOCK)
# else
# define S_ISSOCK(m) 0
# endif
#endif

/* Test if something is a FIFO. */
#ifndef S_ISFIFO
# ifdef S_IFIFO
# define S_ISFIFO(m) (((m) & S_IFMT) == S_IFIFO)
# else
# define S_ISFIFO(m) 0
# endif
#endif

but then i can't find them used.
Taj Khattra
2004-06-03 05:50:28 UTC
Permalink
(Although if a working troff pops up in the ports CVS tree,
the current ports version has some issues (#9 prefixes
are busted in a few spots), but the tarball i posted a
while ago works for me.
boyd, rounin
2004-06-03 08:30:35 UTC
Permalink
i think [Christophe ?] and i found, yesterday, that registers can
now be given multi-character names. iirc:

.nr foo 1 1

...

\([foo]

now, that is not troff.
Douglas A. Gwyn
2004-06-07 08:55:11 UTC
Permalink
Post by boyd, rounin
i think we [Christophe ?] and i found that the registers could
.nr foo 1 1
...
\([foo]
now, that is not troff.
What is especially unfortunate is that SoftQuad did a
better job of designing such extensions to troff, but
whoever did the above came up with an incompatible
scheme.
Jon Snader
2004-06-07 13:19:11 UTC
Permalink
Post by Douglas A. Gwyn
Post by boyd, rounin
i think we [Christophe ?] and i found that the registers could
.nr foo 1 1
...
\([foo]
now, that is not troff.
What is especially unfortunate is that SoftQuad did a
better job of designing such extensions to troff, but
whoever did the above came up with an incompatible
scheme.
What's unfortunate is that we are spending more and more of our
time on this list bashing Unix/Linux/Gnu, instead of addressing
the advantages and problems with Plan 9. It's always fun to stick
a finger in the eye of a competitor, of course, but providing a
vehicle for this pleasure is not my understanding of what Plan 9
is all about.

What's unfortunate is that our bashing is often misinformed, both
in spirit and in detail. The above comments are a case in point.
Is there anyone here who really maintains that the two character
name space in [nt]roff is not a disadvantage? Why should we
complain that groff has removed this disadvantage? Why should we
not remove it from [nt]roff? The answer given above is that the
extension is incompatible, but that is incorrect. The groff
rules for using registers are the same as for [nt]roff except
that register names greater than 3 characters use brackets
*instead* of the opening parenthesis:

\n[foo] (NOT \([foo])
\*[bar]
etc.

Since I have personally typeset many of the Version 7 papers with
groff, I can attest to its compatibility with [nt]roff.

What's unfortunate is that many of us here act as if no one else has
anything to teach us. Plan 9 is great software and a shining
example of excellence in design, but just maybe it isn't the
final answer on all questions.

jcs
Douglas A. Gwyn
2004-06-09 09:19:40 UTC
Permalink
Post by Jon Snader
What's unfortunate is that our bashing is often misinformed, both
in spirit and in detail. The above comments are a case in point.
Is there anyone here who really maintains that the two character
name space in [nt]roff is not a disadvantage? Why should we
complain that groff has removed this disadvantage? Why should we
not remove it from [nt]roff? The answer given above is that the
extension is incompatible, but that is incorrect. The groff
rules for using registers are the same as for [nt]roff except ...
Too bad you didn't bother to understand the comment to which
you replied. The incompatibility I mentioned was between
groff and SoftQuad's prior extension of the register names.
Now there are multiple formats for extended troff source
documents, something that could easily have been avoided.
Post by Jon Snader
What's unfortunate is that many of us here act as if no one else has
anything to teach us.
The GNU developers seem to provide support for that.
Aharon Robbins
2004-06-10 10:59:18 UTC
Permalink
Post by Douglas A. Gwyn
Post by Jon Snader
What's unfortunate is that our bashing is often misinformed, both
in spirit and in detail. The above comments are a case in point.
Is there anyone here who really maintains that the two character
name space in [nt]roff is not a disadvantage? Why should we
complain that groff has removed this disadvantage? Why should we
not remove it from [nt]roff? The answer given above is that the
extension is incompatible, but that is incorrect. The groff
rules for using registers are the same as for [nt]roff except ...
Too bad you didn't bother to understand the comment to which
you replied. The incompatibility I mentioned was between
groff and SoftQuad's prior extension of the register names.
Now there are multiple formats for extended troff source
documents, something that could easily have been avoided.
SoftQuad's troff is pretty much dead, and has been that way for quite
a while. You can't find it on their web site and I wonder who is still
using it? O'Reilly used it for some of their books, but they switched
to using GNU Troff for their formatting many years ago,
keeping SQtroff only for reprints of the books done with it.

(FWIW, even they have moved off troff to other technologies.)

In terms of sheer numbers, SQtroff can't hold a candle to
Groff, and I'd be curious to know how many people are still
using SQtroff at all.

As also mentioned, Groff is an *excellent* troff implementation.
With the compatibility flag, I have successfully printed Unix
documentation from 1980 with zero problems. (The System III
doc for the MM macros, using the System III tmac.m file!)

I have even had groff diagnose mistakes in my input files that
Unix troff just silently accepted!
Post by Douglas A. Gwyn
Post by Jon Snader
What's unfortunate is that many of us here act as if no one else has
anything to teach us.
The GNU developers seem to provide support for that.
I beg your pardon? I, at least, as a GNU developer, am well aware of
what there is to learn from others. Other GNU developers are no more
subjective about their work than many of the people here are. Which was
the original poster's point, methinks.

Two more cents out of pocket,

Arnold
r***@vitanuova.com
2004-06-10 12:34:45 UTC
Permalink
Post by Aharon Robbins
I have even had groff diagnose mistakes in my input files that
Unix troff just silently accepted!
unix/plan9 troff silently accepts most mistakes, in my experience...
Douglas A. Gwyn
2004-06-10 13:24:27 UTC
Permalink
Post by r***@vitanuova.com
Post by Aharon Robbins
I have even had groff diagnose mistakes in my input files that
Unix troff just silently accepted!
unix/plan9 troff silently accepts most mistakes, in my experience...
In fact the troff input language definition requires
that such things as uninitialized variables not be
diagnosed. It's a feature, not a bug.

Bruce Ellis
2004-06-03 10:09:37 UTC
Permalink
ummm, excuse me - but troff was perhaps the
best supported program at the labs. if groff works
with <random input> and troff doesn't, well,
guess which one is wrong. when in doubt avoid
programs that start with 'g' ... except grep.
and why isn't there a ggrep? that would get
100 morons typing straight away.

brucee
Post by Lyndon Nerenberg
4.4BSD introduced the 'doc' macro package. It's a more structured
variant of the 'an' macro package. All the 4.4BSD man pages were changed
over to use -mdoc.
Writing a Plan9 native set of 'doc' macros is on my todo list, but it's
not going to happen any time soon. (Although if a working troff pops up
in the ports CVS tree, this will become a much higher priority for me.)
--lyndon
boyd, rounin
2004-06-03 10:21:24 UTC
Permalink
when in doubt avoid programs that start with 'g' ... except grep.
i'm trying to think of a use for a GNU command that's called:

gawd
Bruce Ellis
2004-06-03 10:25:45 UTC
Permalink
rue st gmartin, or grue st martin?
when in doubt avoid programs that start with 'g' ... except grep.
gawd
boyd, rounin
2004-06-03 10:31:00 UTC
Permalink
Post by Bruce Ellis
rue st gmartin, or grue st martin?
grue

could be some meta GPS mapping glop. damn, another one:

glop
Bruce Ellis
2004-06-03 11:15:30 UTC
Permalink
let's get real and write "goo", oh that's gcc.
(object oriented, works on some platforms with
maximal pain.)
Post by Bruce Ellis
rue st gmartin, or grue st martin?
grue
glop
ron minnich
2004-06-02 16:15:52 UTC
Permalink
Post by Richard Miller
You mean like
IEF450I JOBXXX STEPA - ABEND=S0C4 REASON=00000010
Yes, those were the good old days.
Computers weren't commodities, then. Was that any worse than a pop-up
window that describes everything except what the error is really about
and gives no pointers to additional information? At least with the
IBM manuals, it was possible to find out where the above originated
and to establish an approximation to the cause.
I don't get it, you think 'phase error' is not a useful error message :-)

ron
Trickey, Howard W Howard
2004-06-02 16:25:14 UTC
Permalink
Does anyone else here believe that APE is worth enhancing?
No. Well, hardly anyone.
The Plan 9 user community regards APE as a "failure of vision"
not worth pursuing.
Dan Cross
2004-06-02 23:53:46 UTC
Permalink
Post by Trickey, Howard W Howard
Does anyone else here believe that APE is worth enhancing?
No. Well, hardly anyone.
The Plan 9 user community regards APE as a "failure of vision"
not worth pursuing.
I disagree. I see it as a necessary evil which it would be a mistake
to discard, or, just as bad, allow to decay to the point at which
discarding is necessary.

- Dan C.
ron minnich
2004-06-02 16:54:27 UTC
Permalink
Hm, maybe not, but perhaps somebody can sanity check me here. It
seems to me that the crucial factor lies with the return codes from
the Plan 9 system calls when used by APE simulation procedures.
or:
- 9p server on Plan 9 to Unix/Linux client (they want errno)
- u9fs server on Unix to Unix client

Still, it is hard to argue with the proposition that propagating errno is
propagating braindamage. Maybe we'd better just drop this :-)

ron
boyd, rounin
2004-06-02 17:00:01 UTC
Permalink
error strings to codes or another language is a nightmare.

so is:

echo us centric string > /dev/foo/ctl
Steve Simon
2004-06-02 17:01:24 UTC
Permalink
All I'm after is a somewhat less expensive approach to translating an
error message to its nearest Unix errno.
Expense is related to how common an occurance the event is. Errors are
(hopefully) rare so their expense in supporting imported (IE ape)
software is not so unreasonable.

I think textual error messages are one of plan9's great strengths.

I have had sam give "/n/netware/a/b/xxx.c cannot open - file locked"
on occasion. File locking is an alien concept to plan9 but I still
get the correct and informative error from the imported netware filesystem.

-Steve
Joel Salomon
2004-06-02 19:42:50 UTC
Permalink
And while I'm at it, has anyone figured out why even with the actual
object macros, I can't get Plan 9 troff (or nroff?) to present the
NetBSD man pages anywhere near readably? Suggestions on how to get
this fixed will be gratefully accepted. Rewriting the man pages is
not much of an option, of course.
Differences between plan9's -man and groff's -man, or is netbsd using
another macro set? Just a guess.

I have run into minor incompatibilities typesetting plan9's /sys/doc with
groff, though I think it's the utf in the files that's causing the
confusion.

--Joel
Lyndon Nerenberg
2004-06-03 04:43:38 UTC
Permalink
Post by Joel Salomon
And while I'm at it, has anyone figured out why even with the actual
object macros, I can't get Plan 9 troff (or nroff?) to present the
NetBSD man pages anywhere near readably? Suggestions on how to get
this fixed will be gratefully accepted. Rewriting the man pages is
not much of an option, of course.
Differences between plan9's -man and groff's -man, or is netbsd using
another macro set? Just a guess.
4.4BSD introduced the 'doc' macro package. It's a more structured
variant of the 'an' macro package. All the 4.4BSD man pages were
changed over to use -mdoc.

Writing a Plan9 native set of 'doc' macros is on my todo list, but it's
not going to happen any time soon. (Although if a working troff pops up
in the ports CVS tree, this will become a much higher priority for me.)

--lyndon
a***@9srv.net
2004-06-03 02:00:54 UTC
Permalink
// Does anyone else here believe that APE is worth enhancing?

i do, for what that's worth. posix is awful, but i'd rather see APE brought
up to par than put the work into the GNU stuff. i think rsc had talked
about it at some point (or i read it on one of his pages somewhere), too.

i'd rather everything was written natively, of course. but it's not. and
we've all got better things to do than rewrite ghostscript.
Kenji Okamoto
2004-06-03 03:32:28 UTC
Permalink
Post by a***@9srv.net
we've all got better things to do than rewrite ghostscript.
I believe the problem between ghostscript and Plan 9 lies in the
fact that each uses different character code set, CID-keyed and
UTF-8. ☺

Kenji
Charles Forsyth
2004-06-03 08:49:02 UTC
Permalink
The former is a continuing "failure of vision" that will eventually be
resolved out of necessity. The latter may be solved with the former
i wonder if a message got lost. years ago a group of us
handled messages in many natural languages without fuss,
in a serious commercial product that was, and i believe still is,
widely used with many Western European languages. it did not need the
use of message codes. strings worked well.
the text of the message in the program was its own `index'. still seems
straightforward to me. the strings are anyway the subject
of translation by the translators! you can't get away from them.
all those strcmps? hash. works for messages in files, too,
though there are other searching techniques. more recently, an even
simpler scheme for messages was successfully used in Limbo applications.
i used a hash table, but as it happens, i accidentally produced
a degenerate one. i still didn't notice for two years, even with profiling;
it's just not that much of a bottleneck in many cases.

it's the least of your worries as i said before: working out how to arrange
the strings to cope with the differing requirements of various natural languages
for word order, or `dictionary order' (which can vary across dialects of the
same language), and several other things, all require extra mechanisms.
for each string in a program, one needs to decide whether the text is essentially
`program' or whether it's `speech'.
it's helpful to have a conventional form for the text of such messages so
they can be extracted automatically for translation (assuming the compiler can't assist).

in many cases, system diagnostics
should not be translated (or indeed must not be translated) because they are input
to other programs, or internal diagnostics that should remain as-is.
actually, given the extent of program tools when scripting, it's probably
true that most existing messages wouldn't be translated anyway.
one of the nice things about the change to largely GUI-oriented interfaces
is that it's more obvious which bits of text are intended to be understood
by users of an application. most output of programs such as sed, file, etc.
would not be seen directly (by a `real' end user).

the failure of vision here is to think that just using integers will solve anything.
first, it complicates distributed development (who assigns the integers?
shall we have the usual hack of a `user-defined range'? what happens when
i federate two systems?), as the varying assignment
within Unix systems shows, and they were dealing with a largely fixed set of
system calls and outcomes (so in principal one could enumerate most of
the possibilities). even then, quite a few things settle for EIO.
that's a great help.

to get round the problem of differing errno assignments in Unix, one could
use EAGAIN (or is it EWOULDBLOCK?), ENOENT, etc. but hang on: that's
just a string and you'll still need to put it through a map. EBAHGUM.

more important, with user-level file servers
the set of possible diagnostics is unbounded, because the range of application
is not limited as it was (at least until recently) in Unix.

there is a smaller `failure of vision' though: Lucio is right that a little
more discipline in forming the text of the strings might help.
file servers that really do serve up real files could use the same text
for the same errors. it's a rather tedious job to go round the source to do it,
but it's no more tedious than collecting messages for translation in any case.
boyd, rounin
2004-06-03 09:20:55 UTC
Permalink
you could do %r as a cryptographic hash for translation.

you return the %r chunk and then hash it into an associative array of translated messages, indexed by hash.
Rob Pike
2004-06-03 14:53:14 UTC
Permalink
How about %#r returns a 32-bit binary, none too involved, hash?
i am confused. what about all that information you can have
because it's a string, things that vary from invocation to
invocation? unix says 'bus error'; plan 9 tells you the pc of
the fault, or in other cases the file name that failed, the
fp trap condition, and so on and so on. it's not some
set of 23 errors; it's a huge informative space. why give that
up?

-rob
boyd, rounin
2004-06-03 15:06:00 UTC
Permalink
Post by Rob Pike
unix says 'bus error'; plan 9 tells you the pc of
the fault, or in other cases the file name that failed, the
fp trap condition, and so on and so on. it's not some
set of 23 errors; it's a huge informative space. why give that
up?
i wasn't advocating that. the core messages such as 'bus error'
(or whatever) can be translated. the pc or fp error codes are
gonna be a) not useful to translate or b) impossible to translate.

i 'spose you could return numeric info in roman numerals, but ... ;)
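a sketch of that split: translate only the fixed core of a diagnostic and carry the per-invocation detail (pc, file name, fp condition) through untouched. the core table and the translations are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

typedef struct Core Core;
struct Core {
	char *core;	/* fixed message prefix */
	char *xlat;	/* its translation */
};

static Core cores[] = {
	{ "bus error",  "erreur de bus" },
	{ "fp trap",    "exception flottante" },
};

/* translate the core prefix, keep the variable tail as-is */
void
xlaterr(char *in, char *out, int nout)
{
	int i, n;

	for(i = 0; i < (int)(sizeof cores / sizeof cores[0]); i++){
		n = strlen(cores[i].core);
		if(strncmp(in, cores[i].core, n) == 0){
			snprintf(out, nout, "%s%s", cores[i].xlat, in+n);
			return;
		}
	}
	/* miss: pass the english through for later hand translation */
	snprintf(out, nout, "%s", in);
}
```

a miss leaves the english intact, matching the suggestion above to examine the untranslated text and add a new entry by hand.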
Rob Pike
2004-06-03 15:16:22 UTC
Permalink
Because there is madness at the end of that road, too?
why?
Yes, that is all extremely human-friendly, but one can't invest
infinite amounts of computing power in interpreting error messages
that do not reach the user. Or that the user is going to ignore
anyway.
what are you talking about?
Once again, my opinion is that errors ought to be reported as
concisely as possible within ambiguity constraints and that it ought to
be possible to request the additional details separately. But I seem
to be in a minority.
as concisely as possible is '23'. are you advocating that?
the errors are there *for the user*, not for programs.
corrupting a system that works hard -- and mostly
successfully -- to deliver clear, helpful, detailed error
messages, in order to make some bastard subsystem
spend less CPU time *for users accustomed to unhelpful
messages* (for such is unix's lot) is perverse.

APE can go jump in a lake.

-rob
r***@vitanuova.com
2004-06-03 15:27:57 UTC
Permalink
Post by Rob Pike
the errors are there *for the user*, not for programs.
IMO it is useful, occasionally, for a program to be able
to distinguish between classes of error.

but perhaps you disagree?
boyd, rounin
2004-06-03 15:47:35 UTC
Permalink
Post by r***@vitanuova.com
but perhaps you disagree?
perhaps they should only return BROKEN and tell the user, instead of the ECALLBACKINAWHILEIFYOURFEELINGLUCKYPUNK ... ?
boyd, rounin
2004-06-03 15:25:13 UTC
Permalink
Because there is madness at the end of that road, too?
no, rob is right. _core_ textual error messages are extensible
and loss of information is unacceptable. send along the, say,
english text, cryptographically hash it, and then look up
the hash in your table for whatever language you
wanna translate it into.

if you get a miss, examine the english text and add in
a new message (by hand).