Post by erik quanstrom
you are arguing for a cartoon hammer that runs away when you're
not looking at it.
That is an excellent definition of optimization! Typical
optimizations:
- putting a variable in a register
- caching a value in a register
- moving a computation outside a loop
- inlining
- not computing something whose result is never used
- reusing a register (so the old variable no longer exists)
- moving a computation from the called function to the call site
(partial inlining)
- partial evaluation
- caching a result
- not allocating a variable that is never used
In all these cases, the generated code will look different
compared to when *no* optimization is done.
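
To make the first couple concrete, here is an illustrative
sketch in C (my own, written at the source level; a real
compiler does this to the generated code, not the source):

    /* as written: x / 2.0 is recomputed on every pass through the loop */
    void scale(double *a, int n, double x)
    {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * (x / 2.0);
    }

    /* what "caching a value in a register" plus "moving a computation
       outside a loop" amount to, spelled out in source form */
    void scale_opt(double *a, int n, double x)
    {
        double t = x / 2.0;     /* computed once, kept in a register */
        for (int i = 0; i < n; i++)
            a[i] = a[i] * t;
    }

Both functions compute the same result; the second just avoids
redoing the division n times.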
The issue again is: who draws the line? And where? And is it
the same for every processor model? [An optimization on one
can be a pessimization on another -- aggressive inlining that
helps a core with a big instruction cache can thrash the cache
on a smaller one.] Note that even the simple flat memory model
of C is an illusion.
The "stalin" Scheme to C compiler can do whole program
optimization and produces amazingly efficient programs --
often better than hand written C code. Letting it do the
optimization saves a lot of developer time, and more
importantly, one can write clear understandable code and not
have to hand optimize it into spaghetti code (where some
obscure bug can hide). Now the compiler is slow so I would
only use it if I am going to run something for a very long
time.
I would argue that one should not put limits on what can be
optimized, but should keep the language simple and pin down
its semantics as precisely as possible.
Post by erik quanstrom
Post by Bakul Shah
Post by erik quanstrom
and i can design a standards-compatible compiler that will break
most any c program.
Care to give an example? I am genuinely interested. The C
standard is far from perfect but it has worked well enough.
[Where is Doug Gwyn when we need him?]
oh, there are a number of obvious under-handed tricks. let's see,
for an intel compiler, change the structure padding to something bizarre
like 3 bytes. that will break a large number of linux drivers. (the 57711
for sure!) rearrange structure elements other than the first.
you can change from signed-preserving to unsigned-preserving.
that'll catch a lot of folks out. do ones-complement arithmetic
(i believe that's still legal). have 10-bit bytes, 20-bit shorts,
40-bit ints, 45-bit longs and 80-bit vlongs. (i'm not sure that's
completely legal, but you get the point.) make pointers to
different types different sizes. the list goes on. the
combinations are silly, too. default unsigned characters with
unsigned-preserving behavior. good luck debugging that!
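
to make a couple of those concrete, here's a small c sketch (the
struct and the names are mine, purely illustrative) of everyday
code such a conforming-but-hostile compiler would break:

    #include <string.h>

    struct hdr {            /* drivers often assume a fixed layout here */
        char  tag;
        short len;          /* 3-byte structure padding moves this offset */
        int   cmd;
    };

    void parse(const unsigned char *wire)
    {
        struct hdr h;
        memcpy(&h, wire, sizeof h);  /* only right if padding matches the wire */

        /* default char signedness: assuming wire[0] is 0x80, hi is 1 with
           unsigned plain char but 0 with signed 8-bit char (c becomes -128) */
        char c = (char)wire[0];
        int hi = (c == 0x80);

        /* promotions: with value-preserving rules, us promotes to int and
           neg is 1; with k&r unsigned-preserving rules it promotes to
           unsigned and neg is 0 */
        unsigned short us = 1;
        int neg = (us - 2 < 0);

        (void)h; (void)hi; (void)neg;
    }

all of it compiles cleanly today; none of it is promised by the
standard.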
My head hurts! Clearly, programs that break won't have been
standards-compliant to begin with :-)
Post by erik quanstrom
Post by Bakul Shah
Post by erik quanstrom
what i think has broken down is common sense among compiler
writers. they're too focused on being fast, and not focused enough
on being *useful*.
I agree, but I am afraid they do that because that is what the
market demands. Most of the software that the Internet, most
people, and most businesses run on is written in C or C++, so
C/C++ and their standards have been *useful* (but see below).
Fast does matter when you scale up (a big company may need
18K servers instead of 20K, and that means real savings and
less heat generation).
i don't understand your use of the word "useful". to me, the c
standard has become like a political issue. it's something to
game for personal advantage.
gcc etc. are used to deliver a lot of code that is used in the
real world. And without a standard there would've been a lot
less interoperability and far more bugs.
You seem to be arguing for K&R as the standard or something,
but we already tried that until 1989. A standard was needed
because C's success brought independent implementations that
interpreted the things K&R left unwritten in different ways.
I doubt Ritchie and co wanted to take an active and central
role (and I am not familiar with the history), but my guess
is that only that could've kept the standard simple and readable.