Discussion:
[9fans] fossil (again)
t***@polynum.com
2012-01-10 02:40:48 UTC
So I have reinstalled Plan 9 (a "howto" for doing it without a CD etc. will
be written in a few days).

Since I want to grasp fossil (no venti) for now, I have allocated "only"
1 GB to the thing.

With an almost virgin installation (only some MB more since I'm
debugging kerTeX on Plan 9), and no snapshots, there are less than 300 MB
of data, and fossil announces taking... 800 MB (with 1 GB, I have only
20% free).

Can somebody point me to a doc from which a less than gifted mind, it
seems, can get a rough idea of what's going on, what space is needed,
etc.? Is there a "garbage collector", that is, for "-t" files and dirs,
once a file is deleted are its blocks freed and available, or are there
commands to run to reclaim space?

I _read_ the man pages. But this is still not clear...

Side note: are there statistics about the Plan 9 distribution, to know
what the best block size is? It seems that there are a lot of small text
files, so 8 KB is perhaps too much.

TIA
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
David du Colombier
2012-01-09 22:27:01 UTC
Are you sure you disabled temporary snapshots? You can disable
them by removing the line "snaptime", or just removing "-t", or
setting "-t none" on that line, in fossil configuration.
Sorry, I mean "-a", not "-t".
--
David du Colombier
David du Colombier
2012-01-09 22:38:54 UTC
Are you sure you disabled temporary snapshots? You can disable
them by removing the line "snaptime", or just removing "-t", or
setting "-t none" on that line, in fossil configuration.
Sorry again, I typed too fast. Of course I mean "-s" for
temporary snapshots.
--
David du Colombier
David du Colombier
2012-01-09 22:23:33 UTC
Post by t***@polynum.com
Can somebody point me to a doc from which a less than gifted mind, it
seems, can get a rough idea of what's going on, what space is needed,
etc.? Is there a "garbage collector", that is, for "-t" files and dirs,
once a file is deleted are its blocks freed and available, or are there
commands to run to reclaim space?
Are you sure you disabled temporary snapshots? You can disable
them by removing the line "snaptime", or just removing "-t", or
setting "-t none" on that line, in fossil configuration.

After disabling snapshots, have you removed the old snapshots
with "snapclean 0" in fossilcons?

In the default configuration, Fossil takes temporary snapshots
every hour and discards them after two days.

How do you measure disk usage? Do you use "df" in fossilcons?
Post by t***@polynum.com
Side note: are there statistics about the Plan 9 distribution, to know
what the best block size is? It seems that there are a lot of small
text files, so 8 KB is perhaps too much.
8 KB seems like a good default to me. If you really want,
you can specify a different block size with fossil/flfmt -b.

Beware that it cannot be larger than 57 KB on the current
Plan 9 Venti.
--
David du Colombier
t***@polynum.com
2012-01-10 17:11:29 UTC
Are you sure you disabled temporary snapshots? You can disable
them by removing the line "snaptime", or just removing "-t", or
setting "-t none" on that line, in fossil configuration.
Yes, this is the first thing I've done.
After disabling snapshots, have you removed the old snapshots
with "snapclean 0" in fossilcons?
No. I thought that if low == high epochs, there is no room left for
cleaning?
How do you measure disk usage? Do you use "df" in fossilcons?
Yes.
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
David du Colombier
2012-01-10 13:47:19 UTC
Post by t***@polynum.com
No. I thought that if low == high epochs, there is no room left for
cleaning?
Temporary snapshots automatically expire after snapLife, specified
by snaptime -t (0 means unlimited); the cleanup runs every day, or
every snapLife if that is shorter.

Running snapclean 0 will discard all snapshots and will
set epoch low = hi. Running snapclean without an argument will
only discard snapshots older than snapLife, or everything
if snapLife is unspecified.

You can display the current epochs with the "epoch" command
in fossilcons.

You can check the remaining temporary snapshots with:

% 9fs snap
% ls /n/snap
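The relation between temporary snapshots, the epoch range, and snapclean can be sketched with a toy model (this is not fossil's code; the class, its names, and the epoch arithmetic are simplifications for illustration only):

```python
# Toy model (not fossil's actual code) of epochs and snapclean.
# Each temporary snapshot pins the epoch it was taken at; blocks
# belonging to epochs below "low" can be reused.

class FossilModel:
    def __init__(self):
        self.hi = 1       # current (active) epoch
        self.snaps = []   # epochs pinned by temporary snapshots

    @property
    def low(self):
        # the oldest epoch still needed; with no snapshots, low == hi
        return min(self.snaps, default=self.hi)

    def snap(self):
        # a temporary snapshot pins the current epoch and advances it
        self.snaps.append(self.hi)
        self.hi += 1

    def snapclean(self, maxage=None):
        # snapclean 0 (modeled here as maxage 0 or None) discards all
        # snapshots; otherwise only those older than maxage epochs go
        if not maxage:
            self.snaps = []
        else:
            self.snaps = [e for e in self.snaps if self.hi - e <= maxage]

fs = FossilModel()
for _ in range(5):
    fs.snap()
print("before:", fs.low, fs.hi)   # old snapshots hold low back
fs.snapclean(0)
print("after: ", fs.low, fs.hi)   # low == hi, space is reclaimable
```

In this model, snapclean 0 leaves epoch low equal to hi, matching the intuition above that once low == hi there is nothing left to clean.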
--
David du Colombier
t***@polynum.com
2012-01-10 18:00:09 UTC
Post by David du Colombier
Post by t***@polynum.com
No. I thought that if low == high epochs, there is no room left for
cleaning?
Temporary snapshots automatically expire after snapLife, specified
by snaptime -t (0 means unlimited); the cleanup runs every day, or
every snapLife if that is shorter.
Running snapclean 0 will discard all snapshots and will
set epoch low = hi. Running snapclean without an argument will
only discard snapshots older than snapLife, or everything
if snapLife is unspecified.
You can display the current epochs with the "epoch" command
in fossilcons.
% 9fs snap
% ls /n/snap
Thanks for the explanations!
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
t***@polynum.com
2012-01-13 18:33:23 UTC
Post by David du Colombier
[...]
Running snapclean 0 will discard all snapshots and will
set epoch low = hi. Running snapclean without argument will
only discard snapshots older than snapLife or everything
if unspecified.
[...]
% 9fs snap
% ls /n/snap
So, there is no snapshot, only plan9.iso + approx. 40 MB for kerTeX, /tmp
empty, and I still have almost 750 MB used and only 220 MB free?

Is there a way to know what occupies roughly twice the size of the
files?

Furthermore, fossil/flchk is "deprecated in favor of console", but it
should still work, and it does not, because there is no venti. (On the
console, check reports no leaks and no problems.)

Does anybody have ideas on how to debug this? Unless the 8 KB default
block size for the distribution explains the overhead... But that seems
a bit too much to me!
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
David du Colombier
2012-01-12 21:29:19 UTC
Post by t***@polynum.com
So, there is no snapshot, only plan9.iso + approx. 40 MB for kerTeX, /tmp
empty, and I still have almost 750 MB used and only 220 MB free?
Is there a way to know what occupies roughly twice the size of the
files?
I don't know. I just set up a tiny Fossil file system and
extracted the current Plan 9 CD image on it:

term% du -sh plan9.iso
276.5566M plan9.iso

main: df
main: 397,328,384 used + 674,422,784 free = 1,071,751,168 (37% used)
Post by t***@polynum.com
Furthermore, fossil/flchk is "deprecated in favor of console", but it
should still work, and it does not, because there is no venti. (On the
console, check reports no leaks and no problems.)
Use fossil/flchk -f when Fossil is not connected to Venti.

You should not use fossil/flchk to fix a running Fossil.
The "check" command from the Fossil console is safe
because it halts Fossil before running the check and
unhalts it after.
Post by t***@polynum.com
Does anybody have ideas on how to debug this? Unless the 8 KB default
block size for the distribution explains the overhead... But that seems
a bit too much to me!
You should probably try to compare with "du -sh".
--
David du Colombier
erik quanstrom
2012-01-12 22:08:33 UTC
Post by David du Colombier
You should probably try to compare with "du -sh".
why will -h make a difference? i don't see it in the code.

- erik
David du Colombier
2012-01-12 22:25:20 UTC
Post by erik quanstrom
why will -h make a difference? i don't see it in the code.
It's more convenient. But why this question?
It doesn't matter anyway.

My point was just to compare the size reported by fossilcons df
with the size reported by du.
--
David du Colombier
erik quanstrom
2012-01-12 22:37:53 UTC
Post by David du Colombier
Post by erik quanstrom
why will -h make a difference? i don't see it in the code.
It's more convenient. But why this question?
It doesn't matter anyway.
i read your email as implying that -h returned substantively
different results than otherwise.

i guess i just misinterpreted what you were saying.

- erik
t***@polynum.com
2012-01-13 22:00:31 UTC
Post by David du Colombier
You should probably try to compare with "du -sh".
BTW, I have "played" with du(1), since it partially answers (Erik gave
data) the question of block-size optimization, namely via the "-b"
option. If I understand correctly, this counts only "data" blocks, so no
inodes or the like, but it may give a clue about the best block size. So:

term% du -s /
364996379 /

term% du -s -b 2048 /
365017964 /

term% du -s -b 8192 /
365080272 /

term% du -s -b 512 /
365002735 /

Well... in this partial evaluation, the winner is 1024, but only by
about 80 KB, not a lot to shout about.
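For reference, "du -s -b n" effectively rounds each file up to a whole number of n-byte data blocks. A minimal sketch of that computation (the file sizes below are made up for illustration, not taken from the Plan 9 tree):

```python
# Sketch of what "du -s -b blocksize" counts: data blocks only,
# each file rounded up to a whole number of blocks (no inodes,
# no indirect blocks). The sizes here are hypothetical.

def du_b(sizes, blocksize):
    total = 0
    for size in sizes:
        blocks = (size + blocksize - 1) // blocksize  # round up
        total += blocks * blocksize
    return total

sizes = [100, 4096, 5000, 123456]   # hypothetical file sizes, in bytes
for bs in (512, 2048, 8192):
    print(bs, du_b(sizes, bs))
```

As in the measurements above, the rounded total grows with the block size; the gap between two totals is the extra tail waste.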

For the occupation of fossil, with 8 KB:

fsys blocks: total=126524 used=92045(72.7%) free=34464(27.2%) lost=15(0.0%)

So fossil's occupation is roughly twice the size of the files: 754 MB of
fossil for 365 MB of "real" data.

I don't get it!
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
erik quanstrom
2012-01-10 00:44:23 UTC
Post by t***@polynum.com
Side note: are there statistics about the Plan 9 distribution, to know
what the best block size is? It seems that there are a lot of small text
files, so 8 KB is perhaps too much.
i did these calculations for the files in / on my worm. i used values
from ken's file server for a variety of block sizes. the program is
careful to count all the indirect blocks as well, but for simplicity i
ignore directories rather than working hard to guess how much
storage they're using. (can be wrong if entries are deleted.)

i think these numbers will be similar to those of fossil.

blksize	files	blocks	mb used
16384	35738	1427263	22300
8192	35738	2675543	20902
4096	35738	7775796	30374

obviously, there are two competing forces at work here. the amount
of space wasted off the tail of the last block, and the amount of blocks
required to map the data into the inode completely. it seems that for
my mix of files, 8k is a winner.
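The two competing forces can be made concrete with a toy calculation (an illustrative sketch, not erik's program: NDIRECT and PTRSIZE are assumed parameters, and double-indirect blocks and directories are ignored, as his program ignores directories):

```python
# Toy model of per-file block usage: data blocks (rounded up) plus the
# single-indirect blocks needed to map them. NDIRECT and PTRSIZE are
# assumptions for illustration, not ken's file server's real layout.

NDIRECT = 6   # direct pointers held in the inode (assumed)
PTRSIZE = 4   # bytes per block pointer (assumed)

def blocks_used(size, blocksize):
    data = (size + blocksize - 1) // blocksize   # tail waste lives here
    ptrs_per_blk = blocksize // PTRSIZE
    indirect = 0
    remaining = max(0, data - NDIRECT)
    while remaining > 0:                         # mapping cost lives here
        indirect += 1
        remaining -= min(remaining, ptrs_per_blk)
    return data + indirect

# a small file pays mostly tail waste; a big file pays mapping cost
for bs in (4096, 8192, 16384):
    small = blocks_used(200, bs) * bs
    big = blocks_used(50 * 1024 * 1024, bs)
    print(bs, "200-byte file occupies", small, "bytes;",
          "50 MB file needs", big, "blocks")
```

With pointers this cheap, tail waste dominates in the toy model; erik's real numbers, where 4096 loses badly to 8192, suggest the actual per-block mapping overhead on ken's file server is much larger than this sketch assumes.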

- erik
Bruce Ellis
2012-01-10 05:04:14 UTC
Many a moon ago Basser was doing something about incorporating various
bits into a 32V Vax system.

They went for a 1K block filesystem (don't remember which one, early
BSD?) but spent a large amount of time on retrofitting 512B blocks. I
wrote a small program that traversed a disk and reported on 1K vs 512B
usage. The 1K filesystem used 27% less space. There's a random one-point
sample. They pushed ahead with the smaller block size as
"it must have less waste". Then again a lot of silly things were done
and I moved to Murray Hill; don't let theory and measurements get in
the way of their comfortable paying jobs, be safe and stupid. A
university was not the place for Computer Science.

Then again I have a Coraid so my bits are safe.

I bought a Netgear NAS recently. It seemed like a bargain at a little
over the price of the 1T drive it came with. I put a 1T drive in the
second slot and hope this will be a good place to put stuff (the 2nd
drive mirrors the first). If I had the time I'd buy a second one and
hack it ruthlessly to support 9P. It supports CIFS, http, ftp, and has
a torrent client(!) of all things. Don't underestimate dumping a tar
of current work via ftpfs. As primitive a solution as you could ask
for, it is great for my disparate herd of stuff. Type "bu" in some root
on some OS when you've done your work for the day.

Your call.

brucee
Post by t***@polynum.com
Side note: are there statistics about the Plan 9 distribution, to know
what the best block size is? It seems that there are a lot of small text
files, so 8 KB is perhaps too much.
i did these calculations for the files in / on my worm.  i used values
from ken's file server for a variety of block sizes.  the program is
careful to count all the indirect blocks as well, but for simplicity i
ignore directories rather than working hard to guess how much
storage they're using.  (can be wrong if entries are deleted.)
i think these numbers will be similar to those of fossil.
blksize files   blocks  mb used
16384   35738   1427263 22300
8192    35738   2675543 20902
4096    35738   7775796 30374
obviously, there are two competing forces at work here.  the amount
of space wasted off the tail of the last block, and the amount of blocks
required to map the data into the inode completely.  it seems that for
my mix of files, 8k is a winner.
- erik
--
Don't meddle in the mouth -- MVS (0416935147, +1-513-3BRUCEE)
Lyndon Nerenberg
2012-01-10 05:30:34 UTC
Post by Bruce Ellis
Your call.
You didn't give us your number.
Bruce Ellis
2012-01-10 05:34:20 UTC
confused me. you mean 27% or what's in my signature?
Post by Lyndon Nerenberg
Post by Bruce Ellis
Your call.
You didn't give us your number.
--
Don't meddle in the mouth -- MVS (0416935147, +1-513-3BRUCEE)
t***@polynum.com
2012-01-10 12:14:45 UTC
Post by erik quanstrom
i think these numbers will be similar to those of fossil.
blksize	files	blocks	mb used
16384	35738	1427263	22300
8192	35738	2675543	20902
4096	35738	7775796	30374
obviously, there are two competing forces at work here. the amount
of space wasted off the tail of the last block, and the amount of blocks
required to map the data into the inode completely. it seems that for
my mix of files, 8k is a winner.
Thanks for the data, Erik!
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
t***@polynum.com
2012-01-10 12:20:05 UTC
Post by Bruce Ellis
[...]
Then again I have a Coraid so my bits are safe.
I bought a Netgear NAS recently. It seemed like a bargain at a little
over the price of the 1T drive it came with. I put a 1T drive in the
second slot and hope this will be a good place to put stuff (the 2nd
drive mirrors the first). If i had the time I'd buy a second one and
hack it ruthlessly to support 9P. It supports CIFS, http, ftp, and has
a torrent client(!) of all things. Don't underestimate dumping a tar
of current work via ftpfs. As primitive a solution as you could ask
for it is great for my disparate herd of stuff. Type "bu" in some root
on some OS when you've done your work for the day.
I will probably buy some appliance sooner or later, to secure my bits.
But I have to understand at least the basics of dealing with fossil
first, and the minimal space requirements of the thing ;)
--
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C