Thank you, sqweek. The second Golden Apple with καλλιστι on it is
totally yours. The first one went to Russ Cox.
Post by sqweek
You don't care who mounts what where, because the rest of the system
doesn't notice the namespace change.
So essentially there shouldn't be a problem with mounting on a single
"public" namespace as long as there is only one user on the system. The
mount restriction in UNIX systems was put in place because multiple users
exist, some of whom may be malicious. Virtualization and jailing will relax
that requirement.
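To make sqweek's "the rest of the system doesn't notice" concrete, here is a
minimal Plan 9 C sketch of a per-process namespace change; the directory
being bound is made up for illustration:

#include <u.h>
#include <libc.h>

void
main(void)
{
	/* detach from the shared namespace: changes below are private to
	 * this process and its future children */
	if(rfork(RFNAMEG) < 0)
		sysfatal("rfork: %r");

	/* overlay a hypothetical scratch directory on /tmp; nobody else
	 * on the machine sees this binding */
	if(bind("/usr/glenda/scratch", "/tmp", MREPL) < 0)
		sysfatal("bind: %r");

	/* from here on, opens of /tmp/... resolve into the private overlay */
	exits(nil);
}

The bind is visible only to this process and its descendants; other users'
namespaces are untouched.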
Post by sqweek
As Pietro demonstrated, no interface configuration is necessary here.
Only because the configuration is hidden somewhere in Plan 9, though I don't
know where.
_Someone_ or _something_ has to decide whether to route your packets
through, say, a ppp interface or an eth interface--when both interfaces are
present--and to do that according to configuration. That won't happen on
its own.
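To be fair to both sides: on Plan 9 the decision comes from configuration
too; it is just expressed as writes to control files rather than
ifconfig/route invocations. A minimal sketch in Plan 9 C, assuming the
/net/iproute interface described in ip(3); the gateway address is a
placeholder:

#include <u.h>
#include <libc.h>

void
main(void)
{
	int fd;
	char *msg = "add 0 0 192.168.1.1";	/* default route via a hypothetical gateway */

	/* /net/iproute is the IP stack's routing table; writing to it is
	 * exactly the kind of decision that has to come from configuration
	 * somewhere */
	fd = open("/net/iproute", OWRITE);
	if(fd < 0)
		sysfatal("open /net/iproute: %r");
	if(write(fd, msg, strlen(msg)) < 0)
		sysfatal("write: %r");
	close(fd);
	exits(nil);
}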
When P. G. suggested an imaginary "motorola" file server, he never said how
the file server is supposed to access the cellular network. If that's going
to happen by tunnelling through another protocol, e.g. IP, then the question
of _which_ interface to choose remains. And if it's going to happen over
some special protocol, then that protocol must occupy a place in the network
stack above some _configured_ network interface.
On a different note, what purpose did his "-M 'RAZR V3' 555 555 5555"
switches serve? Don't they qualify as interface configuration?
Post by sqweek
Certainly. If someone has access to, say, the physical machine, then
they have the ability to boot whatever operating system they wish,
potentially modified to their liking and do whatever they want with
the hardware.
This result comes from the "disposable" machine paradigm, in which the
machine you work at need not be in any way _significant_ to you. It doesn't
quite match the "home computer" scheme of things. If someone manages to
boot your home or portable personal computer, they are set to collect
everything you've stored on it. In that respect, I don't see how the
"disposable" machine paradigm can apply. Your personal machine is not
disposable.
It seems the security ascribed to disposable machines comes from the fact
that "user data" is stored on a different, presumably safer, machine, for
example in some sort of data warehouse at a data center. This isn't a new
idea--actually, it's _very_ old--and it's not what happens in home (or
personal) computing.
Post by sqweek
Plan 9 respects that. Not trusting the hostowner is a waste of effort.
Not with reliable biometric authentication, but that's out of scope here.
Post by sqweek
Uh, what now? You either have an interesting definition of home
computer or some fucked up ideas about plan 9. You only need a cpu
server if you want to let other machines run processes on your
machine. You only need an auth server if you want to serve resources
to a remote machine.
Neither statement is true. On a home computer you certainly need a term.
You'll need a cpu for a number of tasks. And you'll need auth if there's
going to be more than one user on the system, or if you need a safe way of
authenticating yourself to your computer. A single glenda account doesn't
quite cut it. If you're going to access your storage, you'll need some
fs('s), too.
The bottom line is: term is _certainly_ not enough for doing all the tasks
a *BSD does, and requiring a home computer to do all these tasks is far
from inconceivable. One *BSD system is almost functionally equivalent to a
combination of term, cpu, auth, and some fs('s).
Post by sqweek"each machine?" I thought we were talking about my "home computer"???
If you have a home network, you have ONE auth server.
No. The point is that if you have _one_ home machine and _multiple_ users,
you'll have to store authentication information on that same machine. It is
no longer a "disposable terminal"; its security becomes as important as the
security of a heavily used auth server. The "disposable" machine paradigm
fails as miserably as the traditional UNIX paradigm: one machine, many
users.
Now, your home computer may be a true single-user machine, but you store
_some_ authentication information on it anyway, namely your own. Such a
machine is, in that respect, as vulnerable as a UNIX machine. It has to be
_physically_ guarded. It is no longer a "disposable" machine.
Post by sqweek
incantation, that's beside the point. In 9p, the abstraction is a file
tree, and the interface is
auth/attach/open/read/write/clunk/walk/remove/stat.
ioctl and the VFS interface are suspiciously similar to that, even though
they serve less generic functions.
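For reference, the interface sqweek lists maps onto the 9P2000 message
pairs; in Plan 9's fcall.h the numbering looks like this:

/* 9P2000 message types, as numbered in Plan 9's fcall.h.
 * Each T-message from the client is answered by the matching R-message
 * (or by Rerror) from the file server. */
enum {
	Tversion = 100, Rversion,
	Tauth    = 102, Rauth,
	Tattach  = 104, Rattach,
	Terror   = 106,	/* illegal: there is no T-side error message */
	Rerror,
	Tflush   = 108, Rflush,
	Twalk    = 110, Rwalk,
	Topen    = 112, Ropen,
	Tcreate  = 114, Rcreate,
	Tread    = 116, Rread,
	Twrite   = 118, Rwrite,
	Tclunk   = 120, Rclunk,
	Tremove  = 122, Rremove,
	Tstat    = 124, Rstat,
	Twstat   = 126, Rwstat,
};

Every Plan 9 file server, whether it serves a disk, a window system, or
/net, answers exactly this message set.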
Post by sqweek
argue HTTP is simpler because it just has GET/PUT/DELETE/HEAD, but you
have to deal with rfc822 formatted messages and different transfer
encodings and auth mechanisms and all sorts of options coming out your
ass.
This is classic. Complication is a sign of maturation. Plan 9 has evaded
that by not maturing, by avoiding diversification. Before you get angry, I
must say that's my "personal" opinion. Nothing I'm going to "force" upon
you. Nothing I _can_ force upon you.
Post by sqweek
network operations - everything is done via /net. Thanks to private
namespaces, you can transparently replace /net with some other crazy
[compatible] filesystem, which might load balance over multiple
How does that differ from presenting a network interface as a block device
on UNIX? And why should avoiding system calls be considered an advantage?
Your VFS layer could do anything expected of /net, provided that the file
system abstraction for the resources represented under /net is viable in
the first place.
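For concreteness, here is what "no syscalls for network operations" means
from a program's point of view: dialing TCP by hand through /net, roughly
what dial(2) does internally. A minimal Plan 9 C sketch; the address is a
placeholder:

#include <u.h>
#include <libc.h>

void
main(void)
{
	int cfd, dfd, n;
	char num[32], path[64], *msg = "connect 192.0.2.1!80";

	/* opening the clone file allocates a fresh conversation directory */
	cfd = open("/net/tcp/clone", ORDWR);
	if(cfd < 0)
		sysfatal("open clone: %r");

	/* reading it back gives the conversation number, e.g. "4" */
	n = read(cfd, num, sizeof num - 1);
	if(n <= 0)
		sysfatal("read clone: %r");
	num[n] = 0;

	/* writing a ctl message establishes the connection */
	if(write(cfd, msg, strlen(msg)) < 0)
		sysfatal("connect: %r");

	/* the data file is now an ordinary byte stream for this connection */
	snprint(path, sizeof path, "/net/tcp/%d/data", atoi(num));
	dfd = open(path, ORDWR);
	if(dfd < 0)
		sysfatal("open data: %r");

	/* ... read(dfd, ...) and write(dfd, ...) as with any other file ... */
	close(dfd);
	close(cfd);
	exits(nil);
}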
Post by sqweek
implemented on any system, which is true [to an extent]. But it's
apparent that no others have the taste to do it as elegantly as plan 9 -
It's not a matter of taste. There are situations, many situations actually,
where the file system abstraction is plainly naive. Sticking with it for
every application verges on being an "ideology."
The VFS approach is by no means inferior to Plan 9's everything-is-a-file
model, but on UNIX systems it is limited to resources that can be
meaningfully represented as file systems. Representing a relational
database as a file system is meaningless. The better representation is
something along the lines of the System.Data.DataGrid class in the
Microsoft .NET Framework.
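As a stand-in for the kind of typed, queryable interface meant here (SQLite
rather than the .NET class, purely for illustration), a relational query in
C looks like this; the parts table and parts.db file are hypothetical:

#include <stdio.h>
#include <sqlite3.h>

/* Query a hypothetical "parts" table in a hypothetical parts.db.
 * The point is the shape of the interface: a predicate goes in,
 * typed rows come out. */
int
main(void)
{
	sqlite3 *db;
	sqlite3_stmt *stmt;

	if(sqlite3_open("parts.db", &db) != SQLITE_OK)
		return 1;
	if(sqlite3_prepare_v2(db,
	    "SELECT name, price FROM parts WHERE price > ? ORDER BY price",
	    -1, &stmt, NULL) != SQLITE_OK)
		return 1;
	sqlite3_bind_double(stmt, 1, 9.99);

	while(sqlite3_step(stmt) == SQLITE_ROW)
		printf("%s\t%.2f\n",
		    (const char *)sqlite3_column_text(stmt, 0),
		    sqlite3_column_double(stmt, 1));

	sqlite3_finalize(stmt);
	sqlite3_close(db);
	return 0;
}

A predicate goes in and typed rows come out; a fixed file hierarchy has no
natural way to express the WHERE clause.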
Post by sqweek
Eris, if you've further issues to raise, we should take this off-list.
No more "issues." I simply rest my case here.
Post by sqweek
On Wed, Aug 20, 2008 at 8:56 PM, Eris Discordia
Post by Eris Discordia
Post by sqweek
No. Private namespaces.
And how does that solve the problem of whom to trust with mounting?
You don't care who mounts what where, because the rest of the system
doesn't notice the namespace change. But it sounds like what you're
really talking about is who to trust with device access, so let's roll
with that.
Post by Eris Discordia
Or with configuring a network interface?
As Pietro demonstrated, no interface configuration is necessary here.
Post by Eris Discordia
If someone has access to, say, eth0 then
they have access to eth0. No amount of private namespaces keeps them from
reading everything that goes through eth0, including other users'
unencrypted traffic.
Certainly. If someone has access to, say, the physical machine, then
they have the ability to boot whatever operating system they wish,
potentially modified to their liking and do whatever they want with
the hardware. Plan 9 respects that. Not trusting the hostowner is a
waste of effort.
Post by Eris Discordia
Plan 9's model says if you have physical access to a terminal there is no
way to secure _that_ terminal against your mischief. Therefore, it
totally trusts you with _that_ terminal. However, your home computer doesn't
run only a terminal. To be usable, it has to run at least a cpu and an
auth, in addition to a term.
Uh, what now? You either have an interesting definition of home
computer or some fucked up ideas about plan 9. You only need a cpu
server if you want to let other machines run processes on your
machine. You only need an auth server if you want to serve resources
to a remote machine.
Post by Eris Discordia
Now, where is the difference between running
authentication on the same machine as the terminal and the traditional
UNIX way of keeping authentication/authorization databases on each
machine?
"each machine?" I thought we were talking about my "home computer"???
If you have a home network, you have ONE auth server.
Post by Eris Discordia
Post by sqweek
Sorry, that should have been "no such file or directory". You need a
mkdir.
The directory could've been there beforehand.
(the file was there beforehand :D)
Post by Eris Discordia
In any case, your deflection
has nothing to do with the fact that Pietro Gagliardi's demand for "a few
commands" to accomplish a certain task has been supplied with an adequate
UNIX answer.
He's under the false impression that abstraction actually _does_ things,
and that because Plan 9 has an everything-is-a-file model it is somehow
more trivial to access a cell phone over its proprietary communication
protocol over the cellular network. An impression perpetuated by the
9people.
Sure, at the end of the day you're still pushing the same packets
around; the point of the abstraction is that it defines an interface. Sure,
each file server has its own
incantation, that's beside the point. In 9p, the abstraction is a file
tree, and the interface is
auth/attach/open/read/write/clunk/walk/remove/stat.
The nice part about the interface is it is simple and consistent.
Once you know what each of those messages means, you are set - there
aren't really any sharp corners to watch out for. I mean you could
argue HTTP is simpler because it just has GET/PUT/DELETE/HEAD, but you
have to deal with rfc822 formatted messages and different transfer
encodings and auth mechanisms and all sorts of options coming out your
ass.
Mind you, a lot of the time you only care about files and
open/read/write/clunk are all you need. Case in point, awk or rc in
plan 9 have zero networking code, yet it is entirely possible to have
them communicate over tcp or whatever protocols are supported in /net
since they can open/read/write. In fact, there are no syscalls for
network operations - everything is done via /net. Thanks to private
namespaces, you can transparently replace /net with some other crazy
[compatible] filesystem, which might load balance over multiple
connections or somesuch. Network transparency means you can use /net
from a different machine and everything just works - hang around some
less technical folk sometime and tell me NAT doesn't deserve to die.
Even with resources like http://portforward.com available, port
forwarding is an impassable obstacle for many people.
I'd like to take a moment to note your unix example used the same
abstraction. You said elsewhere that plan 9's filesystems could be
implemented on any system, which is true [to an extent]. But it's
apparent that no others have the taste to do it as elegantly as plan 9 -
it's all MORE APPS (netcat), MORE FEATURES (tcp code in gawk/bash),
MOOORRREE CODE.
Eris, if you've further issues to raise, we should take this off-list.
-sqweek