openafs@conference.openafs.org
Wednesday, January 14, 2015

GMT+0
[02:02:13] mvita leaves the room
[05:20:05] Jeffrey Altman leaves the room
[05:29:59] Jeffrey Altman joins the room
[05:36:59] Jeffrey Altman leaves the room
[05:42:48] Jeffrey Altman joins the room
[14:13:13] ballbery leaves the room
[14:22:17] mvita joins the room
[14:22:29] ballbery joins the room
[14:23:32] meffie joins the room
[14:45:52] mvita leaves the room
[14:47:23] mvita joins the room
[14:53:39] mvita leaves the room
[14:59:04] mvita joins the room
[15:00:39] wiesand joins the room
[15:05:28] kaduk joins the room
[15:15:58] mvita leaves the room
[15:43:38] wiesand leaves the room
[16:11:49] mvita joins the room
[16:49:53] <kaduk> Hey meffie, I hear we have some afs3-stds document updates in the works ;)
[16:50:22] <meffie> yes, it's a new year. time for a fresh start!
[18:24:08] meffie leaves the room
[18:58:56] <kaduk> How many dcaches would I expect/want to have on a beefy machine (ballpark)?  Is this 64k thing in 10801 indicative?
[19:40:19] <jhutz@jis.mit.edu/owl> The thing in 10801 doesn't affect the number of dcaches.
It affects the way the hash table is sized based on the number of dcaches.
[19:41:06] <kaduk> I am aware.
[19:41:08] <kaduk> I want to size the hash table to get a given average chain length, but cap it so it does not consume
[19:41:15] <kaduk> absurd amounts of memory
[19:46:42] <jhutz@jis.mit.edu/owl> Automatic tuning, based on my 2005 analysis, sets
files  = MIN(blocks/32, 1000000)
dcache = MAX(2000, MIN(files/2, 10000))
[19:48:08] <kaduk> test.dialup.mit.edu has 10k, too, okay.
[19:51:25] <jhutz@jis.mit.edu/owl> ... but apparently that's not exactly what happens today.
files is actually MAX(blocks/32, 1.5 * blocks/(chunkSize-10)),
but is clipped to 1000 on the low end and the number of blocks
on the upper end.
-more-
[19:51:43] <jhutz@jis.mit.edu/owl> ... and dcache size is computed as I described.
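Putting the two messages above together, the automatic tuning might be sketched as follows. This is purely illustrative; the real logic lives in afsd, and "chunkSize" is used exactly as written in the chat, which may not match the actual code.

/* Rough C sketch of the tuning as described above.
 * "blocks" is the cache size in 1K blocks. */
static void
tune_cache(long blocks, long chunkSize, long *files, long *dcache)
{
    long f = blocks / 32;
    long alt = (3 * blocks) / (2 * (chunkSize - 10));  /* 1.5 * blocks/(chunkSize-10) */

    if (alt > f)
        f = alt;
    if (f < 1000)        /* clipped to 1000 on the low end */
        f = 1000;
    if (f > blocks)      /* ... and to the number of blocks on the high end */
        f = blocks;
    *files = f;

    *dcache = f / 2;     /* dcache = MAX(2000, MIN(files/2, 10000)) */
    if (*dcache > 10000)
        *dcache = 10000;
    if (*dcache < 2000)
        *dcache = 2000;
}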
[19:52:41] <jhutz@jis.mit.edu/owl> So the change in 10801 will, in a default configuration with typical
modern cache sizes, probably result in allocating a 32K-entry hash
table to index 10000 entries.
[19:53:02] <jhutz@jis.mit.edu/owl> Oh, no, it won't.
[19:53:16] <jhutz@jis.mit.edu/owl> It only changes things when someone explicitly sets a large dcache size.
[19:54:54] <jhutz@jis.mit.edu/owl> So, the patch as given doesn't seem unreasonable.
The default can end up with a table with 2K buckets and 2000 entries.
[19:55:24] <kaduk> I mean, a 32k-entry hash table with 10k entries is going to have few collisions...
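(For scale, under an ideal uniformly-distributing hash: 10,000 entries across 32,768 buckets is a load factor of about 0.31, so roughly e^-0.31, about 73%, of buckets stay empty and the average occupied bucket holds about 1.2 entries.)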
[19:56:34] <jhutz@jis.mit.edu/owl> That depends on what the hash function is.
[19:57:01] <kaduk> and the distribution of values being hashed
[20:01:26] <jhutz@jis.mit.edu/owl> Yes, and in this case, taken together, those factors don't bode well
for larger numbers of buckets:
./afs.h:#define    DVHash(v)    ((((v)->Fid.Vnode + (v)->Fid.Volume )) & (afs_dhashsize-1))
[20:01:41] mvita leaves the room
[20:02:16] <jhutz@jis.mit.edu/owl> (note that both volume IDs and vnode numbers have much less variability
in their high-order bits than in their low-order bits)
[20:02:52] <jhutz@jis.mit.edu/owl> And the comment notwithstanding, primality of afs_dhashsize-1 is not
relevant.
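To make that concrete, here is a small standalone demonstration. The macro is simplified from the afs.h line quoted above, and the volume ID is made up for illustration.

#include <stdio.h>

/* With afs_dhashsize a power of two, the mask throws away every bit above
 * log2(afs_dhashsize), so only the low-order bits of (Vnode + Volume)
 * matter.  Primality of afs_dhashsize-1 never enters into it; it is a
 * bitmask, not a modulus. */
#define DVHash(vnode, volume, dhashsize) (((vnode) + (volume)) & ((dhashsize) - 1))

int
main(void)
{
    unsigned int volume = 0x20000000;  /* made-up volume ID with only high bits set */
    unsigned int dhashsize = 1 << 15;  /* 32K buckets, as discussed above */
    unsigned int vnode;

    /* The volume's high-order bits contribute nothing once masked:
     * sequential vnodes land in sequential low-numbered buckets. */
    for (vnode = 1; vnode <= 4; vnode++)
        printf("vnode %u -> bucket %u\n", vnode,
               DVHash(vnode, volume, dhashsize));
    return 0;
}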
[20:02:55] <kaduk> Well, the main point of my exercise is to switch to using the jenkins hash function instead of just a bitmask
[20:06:25] <jhutz@jis.mit.edu/owl> I'm not aware of the function you're talking about.
[20:07:10] <kaduk> src/opr/jhash.h
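For reference, the kind of mixing involved looks roughly like the sketch below, which follows Bob Jenkins' public-domain hashword() for a two-word key. It is not the actual opr_jhash interface; see src/opr/jhash.h for the real OpenAFS version.

#include <stdint.h>

#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

/* Jenkins lookup3-style hash of two 32-bit words (e.g. vnode and volume). */
static uint32_t
jhash2words(uint32_t vnode, uint32_t volume, uint32_t initval)
{
    uint32_t a, b, c;

    a = b = c = 0xdeadbeef + (2 << 2) + initval;  /* length = 2 words */
    b += volume;
    a += vnode;

    /* final mixing step from lookup3.c */
    c ^= b; c -= rot(b, 14);
    a ^= c; a -= rot(c, 11);
    b ^= a; b -= rot(a, 25);
    c ^= b; c -= rot(b, 16);
    a ^= c; a -= rot(c, 4);
    b ^= a; b -= rot(a, 14);
    c ^= b; c -= rot(b, 24);
    return c;   /* then "& (afs_dhashsize - 1)" picks the bucket */
}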
[20:16:01] <kaduk> I probably could have saved you the effort of typing that up. :-/
[20:24:50] <jhutz@jis.mit.edu/owl> whatever
[20:39:55] <kaduk> Anyway, compare with 11673 (as noted on 10801)
[20:43:15] <kaduk> which, sigh, is probably going to fail to build because I was dumb in its grandparent.
[22:45:12] kaduk leaves the room
[22:56:37] ballbery leaves the room
[22:58:11] ballbery joins the room