[00:48:49] --- dev-zero@jabber.org has become available
[00:51:19] --- dev-zero@jabber.org has left: offline
[01:06:50] --- abo has become available
[01:35:38] --- dev-zero@jabber.org has become available
[01:35:46] --- dev-zero@jabber.org has left: offline
[05:04:39] --- mmeffie has become available
[05:29:13] --- Jeffrey Altman has left: Replaced by new connection
[05:29:48] --- Jeffrey Altman has become available
[06:08:56] --- cclausen has become available
[06:15:48] --- Derrick Brashear has become available
[07:01:56] --- deason has become available
[07:36:44] --- deason has left
[07:37:26] --- deason has become available
[07:58:42] --- reuteras has left
[08:35:54] is http://git.openafs.org/git/openafs.git supposed to be live now?
[08:36:57] I believe it wasn't going to be up for another two weeks or so
[08:38:42] true
[08:42:34] you'll see an announcement when there's something intended for use
[08:46:38] ah, tx.
[08:47:06] (there were still a few more things to do as of the end of the workshop)
[08:47:47] Basically what still needs doing is ...
[08:48:02] a) Tools. There should be a page in the wiki detailing what needs to be written
[08:48:22] simon - are any of the existing repos relatively usable? e.g., the latest one?
[08:48:24] b) Changing the ownership of the first commit to "IBM", so it's clearer what stuff we just haven't touched
[08:48:58] c) Reordering patchsets so that we always commit the head first, and then add "Cherry-picked from" lines to the branch commits, so we can tie commits on head and branch together
[08:49:07] d) Killing the 1.5 branch.
[08:49:22] e) Installing the tools on the Stanford box.
[08:50:42] stevenjenkins: They're all usable. But if you clone from them and then start development, you're going to be sad, because when we regenerate the upstream tree your clone will be orphaned. (There will be no common ancestry between your clone and the production OpenAFS tree, so you'll never be able to merge, or pull new changes)
[08:51:27] that's not a problem. I just want to try to locate some pieces.
[08:51:31] the wiki still needs to be fixed. i will deal with that shortly
[08:51:37] and jumping around versions in CVS is a real pain..
[08:51:56] I've been having great fun with the git pickaxe.
[08:52:02] heh
[08:52:17] yeah, we've had a few (internal) OpenAFS git repos over the past year...lots of fun.
[08:52:31] And we got into an 'I've written more code than you' contest on the last day of the workshop, with the wonder of git querying.
[08:53:08] who won?
[08:53:17] Derrick.
[08:53:26] Because he was given the original source to commit.
[08:53:33] haha
[08:54:39] will there be instructions for migration to git if we've already been working on development within a cvs checkout?
[08:54:56] Yes.
[08:55:05] cvs diff -u and apply a patch to a git clone checkout?
[08:55:14] or is there anything special, outside of "pull diffs, and apply"
[08:55:14] cvs diff > /tmp/diff ; patch < /tmp/diff
[08:55:15] yeah
[08:55:26] Nothing special.
[08:55:34] ok, just making sure
[08:55:36] You might want to add special sauce, but you don't have to.
[09:19:40] a host package locking question: host.h talks about 3 different locks and a locking precedence: global hash, global list, and host mutex. However, I only see host_glock_mutex -- can someone point me to the other two?
[09:29:56] h_Lock is the only other lock; the hash and list are both protected by H_LOCK
[09:32:33] --- mmeffie has left
[09:33:56] what does h_Lock protect ('use the source, luke' is an ok answer..)
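For orientation on the locking question above (the answer follows below): a simplified sketch of the precedence host.h describes. The global lock, host_glock_mutex, is what H_LOCK takes, and it protects the host hash table and the host list; each host object then carries its own lock. Everything here apart from the host_glock_mutex name is illustrative rather than copied from src/viced:

    #include <pthread.h>

    /* Illustrative only; not the actual src/viced/host.c code. */
    pthread_mutex_t host_glock_mutex;         /* the global lock (H_LOCK) */

    struct host {
        unsigned int addr;                    /* hash key, protected by H_LOCK */
        struct host *next;                    /* hash chain, protected by H_LOCK */
        pthread_mutex_t h_lock;               /* the per-host "host mutex" */
        /* ... per-host state protected by h_lock ... */
    };

    /* Locking order: take the global lock to walk the hash table, take the
     * per-host lock while still holding it, then drop the global lock. */
    struct host *
    lookup_and_lock_host(struct host **hashtab, int bucket, unsigned int addr)
    {
        struct host *h;

        pthread_mutex_lock(&host_glock_mutex);
        for (h = hashtab[bucket]; h != NULL; h = h->next) {
            if (h->addr == addr)
                break;
        }
        if (h != NULL)
            pthread_mutex_lock(&h->h_lock);   /* host mutex, acquired under H_LOCK */
        pthread_mutex_unlock(&host_glock_mutex);
        return h;                             /* caller unlocks h->h_lock when done */
    }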
[09:35:25] each host object has one
[09:36:13] so it's the 'host mutex'?
[09:38:27] http://www.dementia.org/twiki/bin/view/AFSLore/GitTools
[09:38:34] given the comment i guess so
[09:38:51] mind if I clean up the comment?
[09:42:12] the real issue behind this is I think we've seen a race condition in the thread-specific *uclient
[09:42:31] so I'm looking into what locking would be appropriate.
[09:42:36] i'm not sure how that's possible
[09:42:50] one sec and I'll dig it up.
[09:42:51] each thread has its own. each thread can run one thing at a time.
[09:43:13] unless there is something odd about hot thread behavior
[09:43:17] * stevenjenkins nods. I understand that.
[09:43:26] but we use the "thread" key, so that shouldn't hurt us
[09:43:47] I'm not explaining this well. One sec and I'll try to summarize better.
[10:04:12] Jeff, if you haven't been following this, it's quite complicated.
[10:06:11] Under the new scheme, what once was "RFC Editor" is split into several pieces. The RFC Series Editor and Independent Submissions Editor are not equivalent positions; they do not do the same job.
[10:07:54] --- mmeffie has become available
[10:12:44] The RFC Series Editor is responsible for overall coordination/management of the whole ball of wax, more or less. The Independent Submissions Editor is the stream editor responsible for the independent submissions stream. It performs for that stream those functions which are performed by the IESG, IRSG, and IAB for the IETF, IRTF, and IAB streams, respectively. -more-
[10:16:03] Neither position is responsible for the tasks for which most document authors currently interact with the RFC-Editor. Specifically, all of the mechanics of turning an I-D into an RFC, including editing, interacting with authors, shepherds, and the stream managers, coordinating with IANA, assigning RFC numbers, etc. are handled by the RFC Production Center, which is a resource operated by a paid contractor and shared among all streams. Actually providing and maintaining the archives and handling the mechanics of the errata process is done by the RFC Publisher, which is also a paid contractor.
[10:25:49] --- Russ has become available
[10:39:48] --- abo has left
[10:48:07] --- Derrick Brashear has left
[11:28:39] --- Derrick Brashear has become available
[11:51:59] derrick, where does the data getting passed to a pioctl come from? I understand it's a part of the declaration, and I understand it's accessed through ain, but how does that passing take place?
[11:52:17] "badly"
[11:52:23] summatusmentis: Kernel magic
[11:52:44] you effectively marshal stuff in. look at src/venus/fs.c and see how it builds the blob.
[11:53:26] or do you mean once that blob is built how does the kernel "get it"?
[11:53:38] in that case, it's a userspace to kernelspace copy
[11:53:45] no, I don't care as much about the kernel getting it
[11:53:51] ok
[11:53:57] so src/venus/fs.c
[11:54:13] and you need to code that to match what the other side expects to decode (and vice versa for the return)
[11:54:26] one of steven jenkins' coworkers gave us something to make it better but
[11:54:37] 1) it was going to be cumbersome to take
[11:54:49] 2) at the time at least part of the implementation was inefficient
[11:55:38] so, for instance, src/venus/fs.c, line 3189.
I see that pioctl() is being given blob as a parameter, but it looks like blob.in is given the same value as in, and in is set to space, and space doesn't seem to be set anywhere
[11:56:03] hang on
[11:56:15] which branch
[11:56:31] 1.5.whatever
[11:56:47] mostly recent cvs pull, within... the past 2 days?
[11:56:57] in->num_servers = (MAXSIZE - 2 * sizeof(short)) / sizeof(struct spref); in->flags = vlservers;
[11:57:09] sure looks like in is written into, to me
[11:57:29] --- stevenjenkins has left
[11:57:31] in this case the blob which goes in contains a struct sprefrequest
[11:57:47] oh wow, I should learn to read
[11:58:02] thanks :)
[11:58:19] Hmmm. Cache bypass is going to need some love for newer kernels.
[11:58:28] note that struct sprefinfo is what comes out, and is written by the kernel, and decoded by the kernel
[11:58:53] cache bypass needs some love for things which aren't linux, too, but the expertise to do it isn't falling out of the sky, alas.
[11:59:19] It won't even build on newer kernels, sadly. I might have a look at some fixes when I get a moment.
[12:00:12] It does occur to me that its background read queue would be a good solution to making Linux faster, too, as we could then go and do readahead in the background.
[12:00:27] --- deason has left
[12:00:39] --- deason has become available
[12:00:55] --- stevenjenkins has become available
[12:01:14] it's probably the case that the pioctl improvements should be refined and go into the code, incidentally, if they can be made workable without compatibility or performance impact, but at this point i suspect there's no one with time to do anything about it.
[12:01:40] there's already code to do readahead in the background on 1.5. it just needs to be smarter
[12:02:02] Well, it needs to not actively fight with the way Linux does readahead, too.
[12:02:47] Ultimately, we don't (cache bypass aside) implement readpages. Which means we get the Linux native readpages(). Which just calls readpage() until it's done. And our readpage implementation blocks until the read() call succeeds.
[12:03:55] As far as I can see, it means that if you go and try to read 4k, and the kernel decides that you should do readahead for 128k, you won't get your 4k back until the kernel is done.
[12:04:18] ah, right. i remember that.
[12:04:49] of course in this case the readahead i meant would be done in a background daemon, not as you, so it would (ideally) be cached by the time you were ready to read it, at least
[12:05:24] Yes, I'm more worried here about the disk to page cache read, rather than the remote file server to disk.
[12:06:57] for PGetSPrefs, it's reading the ranked server list from sa, which is given its value by afs_srvAddrs[]; where is afs_srvAddrs coming from? (src/afs/afs_pioctl.c 3619)
[12:07:21] probably "global variable". hang on.
[12:07:43] look at afs_server.c
[12:09:16] oh, that makes a lot more sense. thanks
[12:37:39] question: iirc, the LINUX_USE_FH client code was created since recent linux removed iget(); any reason we didn't just use the AFS_CACHE_VNODE_PATH stuff that exists for other platforms?
[12:42:39] > any reason we didn't just use the AFS_CACHE_VNODE_PATH
the variant i wrote first did that. it lacked some of the flexibility.
[12:43:01] the tickets and cvs history will help if you care to read all the gory detail
[12:45:37] is one going to obsolete the other eventually, or anything like that?
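Returning to the pioctl exchange above: a rough sketch of the marshalling pattern fs.c follows. The caller builds the request in a scratch buffer, points blob.in and blob.out at it, and the kernel decodes the request and writes the reply (a struct sprefinfo for this opcode) back into the same buffer. The structure definitions below are abbreviated stand-ins, not the real OpenAFS declarations, and the opcode is passed in rather than hard-coded:

    #include <sys/types.h>
    #include <string.h>

    /* Abbreviated stand-ins for the real OpenAFS declarations. */
    struct ViceIoctl {
        caddr_t in, out;              /* marshalled request and reply buffers */
        short in_size, out_size;
    };

    struct spref {                    /* one ranked server entry; fields are stand-ins */
        unsigned int server;
        unsigned short rank;
    };

    struct sprefrequest {             /* what goes in for the "get server prefs" opcode */
        unsigned short offset;
        unsigned short num_servers;
        unsigned short flags;
    };

    extern int pioctl(char *path, int opcode, struct ViceIoctl *blob, int follow);

    #define MAXSIZE 2048              /* fs.c keeps a static scratch buffer like this */
    static char space[MAXSIZE];

    /* opcode would be the VIOC code for "get server prefs" from the OpenAFS
     * headers; vlservers is the flag value the caller wants in the request. */
    static int
    get_server_prefs(int opcode, unsigned short vlservers)
    {
        struct sprefrequest *in = (struct sprefrequest *)space;
        struct ViceIoctl blob;

        memset(space, 0, sizeof(space));
        /* ask for as many entries as the reply buffer can hold */
        in->num_servers = (MAXSIZE - 2 * sizeof(short)) / sizeof(struct spref);
        in->flags = vlservers;

        blob.in = space;
        blob.in_size = sizeof(struct sprefrequest);
        blob.out = space;             /* the kernel writes the sprefinfo reply here */
        blob.out_size = MAXSIZE;

        return pioctl(NULL, opcode, &blob, 1);   /* NULL path: not tied to a file */
    }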
[12:46:24] linux iget stuff can never obsolete anything on other platforms, however, simon has done work since which would potentially allow us to support many backends at the same time
[12:46:42] I meant, LINUX_USE_FH v AFS_C_V_P
[12:46:46] (by path and by inode could exist in the same platform if it became advantageous to do so)
[12:46:57] sure, so did i
[12:47:01] right, I see
[12:47:15] LINUX_USE_FH can never obsolete AFS_C_V_P completely.
[12:49:05] > support many backends at the same time
as in, a runtime switch for path or inode?
[12:49:49] ideally, yes
[12:51:23] Of course, it's only a small leap from there to multiple cache data stores of different types...
[12:52:02] Oooh, or a read-only cache type, so you could ship a liveCD or similar with a pre-populated cache
[12:52:48] there are actually a number of interesting things which become possible
[12:53:33] can you tell me anything more specific about that? (simon's stuff) or Simon, if he's watching
[12:53:35] once you are willing/able to split, consider segregating cache data by storage speed with a policy engine to decide who gets what.
[12:54:03] all simon's done so far is give us a union which allows additional types to be represented instead of my hack
[12:54:03] I've been doing some work on a ukernel afsd; part of it may have been along the same lines
[12:54:31] oh, hm, perhaps not
[12:56:36] I just have a runtime switch instead of the A_C_V_P define; not really significant
[12:59:26] --- matt has become available
[13:00:05] steven: are you claiming there's a race to setspecific a new uclient?
[13:05:02] Mark's done some more work that lets you use pretty much any Linux filesystem as a cache, on top of my union patch.
[13:05:37] From memory, the drawback of that is that you have to allocate some memory per filehandle, because you don't actually know how big a filehandle actually is.
[13:06:04] derrick, stupid question. Is scope limited by file? for instance, can "global variable" afs_srvAddrs[] be used outside of src/afs/afs_server.c?
[13:06:21] summatusmentis: Yes.
[13:06:36] to which questions?
[13:06:38] -s
[13:06:53] Well, it depends on how it's defined. But if it's not 'static', then it's visible everywhere unless you play linker games.
[13:07:06] static == "local to this file"
[13:07:09] (for globals)
[13:07:13] otherwise, use wherever
[13:07:42] alright, cool
[13:07:50] However, it's only visible at compile time. If you want something in every file, then you have to prototype it, either in the file you're using it in, or (much better) in a header file, which you include both in the origin and destination files.
[13:09:55] that makes sense. thanks
[13:12:58] --- bpoliakoff has become available
[13:31:54] did marc post more on his ACL file?
[13:32:30] the only thing i know of was the negative length from fetchdata thing but it was working.
[13:32:57] my suggestion would be "use a capability bit for per-file acl support; if the client claims it, be compatible there"
[13:33:20] likewise, fileserver gets a compatibility bit after which its vnodes become CForeign
[13:33:54] I'm worried about capability bits...
[13:34:00] how's that?
[13:34:14] i suppose there is the issue that if a volume moves off a capable server, "then what"
[13:34:16] I think there's a danger that the unused ones are going to attract squatters like every other 'spare' field seems to have.
[13:34:39] they're easy enough to register that there's no reason to attract squatters
[13:35:16] That's true of many other things.
I think we need to make very clear to people that we're going to break them, and break them badly, if they assign their own uses to 'reserved' protocol fields.
[13:35:30] ok. well, that's not unique to this
[13:36:17] don't fear that, as russ said, we have to trust precedent. afs3-standardisation owns the bits.
[13:37:26] --- stevenjenkins has left
[13:37:37] we can't just melt down in fear that someone will violate our protocol, because presuming someone really wants to, we cannot stop it. it has to be the bad guy's fault if he does.
[13:38:00] I bet that will depend upon who the bad guy is
[13:38:19] e.g. someone who has money can buy their way out of it
[13:38:56] But the alternative doesn't make any sense. The idea behind protocols is that we will follow them.
[13:39:44] --- haba has become available
[13:39:51] * haba home
[13:41:03] --- stevenjenkins has become available
[13:44:46] Okay, so I have a new readpage implementation for Linux which gives around a 20% speedup on cached reads (on this VM, at least)
[13:46:29] That can't hurt. :)
[13:46:45] The layering violations aren't pretty.
[13:47:08] Well. I was expecting we'd have to do some reorganizing at some point.
[13:47:18] I'm trying to get readpages() style readahead going now. We currently just disable readahead, which hurts us when doing splice() and friends.
[13:47:32] Why not use readpages?
[13:47:41] I am just using readpages :)
[13:47:58] But if you want to use the kernel implementation, you need a non-blocking readpage.
[13:48:39] Ah, so our bypass readpages won't change atm?
[13:49:09] Not just yet, no.
[13:49:21] It would be desirable for it to keep working.
[13:49:27] It's broken ATM.
[13:49:36] Linux moving-targetness.
[13:49:50] Can you clarify?
[13:50:02] The names of the various pagevec functions have changed.
[13:50:18] Oh, for recent kernels. That's annoying.
[13:50:23] Indeed.
[13:50:43] Basically, you now have to tell the LRUQ the type of page you are giving it.
[13:51:50] --- haba has left
[13:53:13] ok.
[13:55:00] --- haba has become available
[13:57:24] --- mdionne has become available
[14:02:44] > any reason we didn't just use the AFS_CACHE_VNODE_PATH stuff that exists for other platforms?
I tested many filesystems with it at the time on Linux, but only ext2/3 were reliable. others ended up oopsing under load.
[14:03:04] --- dev-zero@jabber.org has become available
[14:03:06] --- dev-zero@jabber.org has left: offline
[14:07:46] with the current Linux code, the initial open is done by path, subsequent opens with the stored file handle. you can pretty much use any filesystem you want to hold the cache
[14:19:15] --- mmeffie has left
[14:31:38] > "use a capability bit for per-file acl support; if the client claims it, be compatible there"
Yes, for the negative length thing, if the client is known to support per-file ACLs we'd return the negative value - otherwise we assume the client is treating us as "foreign" and we'd return 0
[14:35:55] uh, isn't that sense backwards?
[14:36:03] no per-file ACL being negative, and vice versa?
[14:36:21] > likewise, fileserver gets a compatibility bit after which its vnodes become CForeign
For old clients the volumes will have to be flagged with VLF_DFSFILESET in the vldb (at create time?). I would think that newer clients would see the per-file ACL support and not use CForeign
[14:37:02] No, we can't return a negative length for old clients, they need to see 0.
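A sketch of the compatibility rule being settled on here: the fileserver only reports the negative length (the per-file ACL signal) to a caller that has claimed a suitable capability bit, and old clients keep seeing 0. The capability name and helper below are hypothetical; no such bit has been registered:

    /* Hypothetical capability bit; nothing with this name or value is registered. */
    #define CAPABILITY_PERFILE_ACL  0x00000010u

    /* Decide what ACL length a fetch-style reply should carry for a vnode.
     * client_caps is the capability mask the client advertised to the server. */
    static int
    acl_length_to_report(unsigned int client_caps, int has_perfile_acl, int acl_length)
    {
        if (!has_perfile_acl)
            return acl_length;                  /* nothing special on this vnode */

        if (client_caps & CAPABILITY_PERFILE_ACL)
            return -acl_length;                 /* new client: negative value flags the per-file ACL */

        return 0;                               /* old client: it treats us as "foreign", show 0 */
    }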
[14:37:13] i guess a little thought's involved since moving a per-file ACL volume off a server able to support it should nominally clear VLF_DFSFILESET
[14:37:33] erm. which old clients?
[14:37:52] Yes, the flag should be cleared if moving to a non file-ACL server
[14:38:17] old clients = clients not modified in any way to know about file ACLs
[14:38:39] they already get negative lengths, and work, don't they?
[14:38:44] Is it the case that CForeign allows those clients to do something useful?
[14:39:02] When CForeign is set, the negative length is a problem
[14:39:34] --- stevenjenkins has left
[14:39:36] ok, so then we're on the same page for old clients. did you envision new clients allowing negative length with CForeign?
[14:39:41] CForeign allows them to not assume permissions come from the parent directory (for access to cached files)
[14:40:10] --- deason has left
[14:40:10] I was thinking new clients would try to work without CForeign
[14:40:20] --- deason has become available
[14:40:30] is there an advantage to doing so?
[14:40:56] would any of this actually break DFS compatibility?
[14:40:57] I don't know - just seems odd to switch all clients over to "DFS" mode, no?
[14:41:26] if "DFS" just means "per-file ACLs" then it seems fine to me
[14:41:37] and there might be differences with assumed permissions because of ownership, etc.
[14:42:23] anyone still care about compatibility with real DFS?
[14:42:37] That seems doubtful.
[14:43:14] we took the real translator out of what will be 1.5.61
[14:43:22] well, dauth/dpass anyway
[14:43:30] BTW from what I saw, the negative length's only use is to decrease a counter for some statistics
[14:43:31] --- stevenjenkins has become available
[14:43:37] but other things (hostafs) use CForeign already
[14:44:06] --- haba has left
[14:44:22] on a side note, is there any hope of a free field in the on-disk vnode structure?
[14:44:26] aix6 can bite me. or perhaps it can bite simon's "warnings left in afs" clock
[14:44:47] I'll just have to remember not to build on AIX :)
[14:45:04] I really must give you the last of the prototyping patches. But I think I'll leave that until git is done.
[14:45:42] bit32 reserved6 was the spare, and it got used for long length
[14:46:01] e.g. "new volume format ftw"
[14:46:16] shouldn't it be renamed to show what it's used for?
[14:46:19] At this rate, we're about to write AFSv4
[14:46:48] We'd better write AFSv4.
[14:47:03] I'm looking at a better on-disk structure for the ACLs, but everything looks complicated unless I can put a pointer in the vnode
[14:47:25] > shouldn't it be renamed to show what it's used for?
probably. also the comment fixed.
[14:47:38] The pointer is on-disk?
[14:47:55] --- deason has left
[14:47:59] well, it needs to be backed by something
[14:48:08] yes, we need to have the connection between a vnode and its ACL on disk somewhere
[14:48:37] --- deason has become available
[14:48:42] ...and synced back along with the vnode when it gets written back
[14:49:10] Sooner or later, we're going to need a new volume format ...
[14:49:24] I don't know if per-file ACLs are the correct driver for that, but it's going to need done.
[14:49:35] we could steal back the magic
[14:49:49] but yes, i think "new format" and "conversion tool"
[14:49:55] That sounds like a line from a cheesy 80s song.
[14:49:59] Per-file ACLs alone aren't enough, but with xattr, it's enough.
[14:50:02] 70s, to me
[14:50:07] --- stevenjenkins has left
[14:50:13] per-file acls aren't enough for what?
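To make the 'pointer in the vnode' idea just discussed concrete: the vnode's on-disk record would carry a reference into a per-volume ACL store, and that reference would be written back whenever the vnode is, so the connection survives a restart. The names below are invented for illustration; the real on-disk record is struct VnodeDiskObject and has no such field today:

    #include <sys/types.h>
    #include <unistd.h>

    typedef unsigned int bit32;

    /* Invented layout, not the actual VnodeDiskObject. */
    struct disk_vnode {
        bit32 length;
        bit32 dataVersion;
        /* ... the rest of the existing fields elided ... */
        bit32 aclSlot;       /* hypothetical: index of this vnode's ACL in a
                              * per-volume ACL store; 0 means no per-file ACL */
    };

    /* Write the whole record back in one go; the ACL reference is synced
     * along with the rest of the vnode, as discussed above. */
    static ssize_t
    sync_vnode(int index_fd, off_t offset, const struct disk_vnode *vd)
    {
        return pwrite(index_fd, vd, sizeof(*vd), offset);
    }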
[14:50:55] wrt simon's 'correct driver'
[14:51:37] maybe the original format I had wasn't so bad - is a fixed 180 bytes per file acceptable?
[14:52:01] Even if the file has no ACLs?
[14:52:15] yes, fixed overhead for all files
[14:52:18] that could add up to lots of disk space on larger cells
[14:52:25] what's the format of the record?
[14:52:40] e.g. i am wondering if it could be linked hash objects.
[14:52:49] the same as the ACL structure
[14:53:15] --- stevenjenkins has become available
[14:53:16] per-file ACLs would handle the same ACLs as directories currently?
[14:53:23] if there's an available pointer in the vnode, we can do much better and not that complicated
[14:53:26] including the site-specific bits and negative ACLs?
[14:53:28] assuming we consider VnodeDiskObject internal (and why wouldn't we?) i propose reserved6->vn_length_hi, and can offer a patch
[14:54:01] if you can assume short and long magic will never be pointers you can do something. it's ugly
[14:54:10] cclausen: yes, I used the same format as is used for the directory vnodes
[14:55:24] --- Derrick Brashear has left
[14:56:37] the new format I was coding would be a table of ACLs with reference counts, and a pointer used for keeping track of free slots for allocation
[14:56:58] Is VnodeDiskObject exposed in volume dumps? If so, mind you don't break Teradactyl.
[14:57:24] It would be nice if whatever is done supported xattr--there would seem to be ways to get that.
[14:57:49] The volume index number should be incremented.
[14:57:53] I'm not sure why we care about Teradactyl?
[14:58:07] Because they do the backups for my site. :)
[14:58:07] You care about anyone parsing dumps in a supported manner.
[14:58:29] well, in that case, does it work with my TSM buta file-level backups?
[14:59:34] Are you using OpenAFS's tools, or a third party's? Does that third party still support those tools?
[15:00:50] BTW: A very unscientifically measured 40% speedup for large files with readpages() implemented. Reading small bits of a file is a little bursty at the moment, though.
[15:02:44] generalizing for xattrs would be more difficult - variable length, etc. ACLs are a simple case.
[15:03:10] Simon: nice, will be curious to see your code
[15:03:17] I think, ultimately, xattrs is going to need some way of treating each additional attribute as a unique FID.
[15:03:36] That seems counterintuitive.
[15:04:04] Not really, if you adopt the view of extended attributes representing additional streams of data within a file.
[15:05:07] mdionne: Happy to share.
[15:05:36] That doesn't make claiming multiple fids seem intuitive.
[15:06:34] A FID ends up representing a stream of data (be it the 'main' contents, or one of the alternate streams). I'm not sure what's so unintuitive about that. Perhaps you could reword your objection?
[15:07:25] I wonder if it would be possible to use the xattrs of the data file in the underlying file system (for namei)
[15:07:46] that could limit support for specific server platforms
[15:07:56] Multiple streams might want to be represented as a collection of fids, I'm not certain. But the xattr abstraction carries the notion of efficiently searched keyspace for data values of limited length.
[15:08:25] sure, but it means we don't have to deal with the complexity ourselves
[15:09:04] I think derrick has mentioned the xattr->xattr idea before--I think it might be attractive when it -does- work.
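A sketch of the table described a few messages up: fixed-size ACL records, each with a reference count so identical ACLs can be shared between vnodes, and unused slots chained through a free-list pointer. Names and sizes are made up for illustration; the real record would reuse the existing directory ACL layout:

    /* Illustration only; not the format actually being coded. */
    #define ACL_RECORD_SIZE 180          /* fixed-size payload; the earlier per-file proposal was 180 bytes */

    struct acl_slot {
        unsigned int refcount;           /* how many vnodes point at this ACL */
        unsigned int nextFree;           /* chains unused slots; valid when refcount == 0 */
        char acl[ACL_RECORD_SIZE];       /* same layout as the directory ACL */
    };

    struct acl_table {
        unsigned int nslots;
        unsigned int firstFree;          /* head of the free list; 0 means the table is full */
        struct acl_slot *slots;          /* slot 0 reserved as the "no ACL" sentinel */
    };

    /* Hand out a slot, reusing a free one if possible. Returns 0 when full. */
    static unsigned int
    acl_alloc(struct acl_table *t)
    {
        unsigned int slot = t->firstFree;
        if (slot != 0) {
            t->firstFree = t->slots[slot].nextFree;
            t->slots[slot].refcount = 1;
        }
        return slot;
    }

    /* Drop a reference; when it hits zero the slot goes back on the free list. */
    static void
    acl_release(struct acl_table *t, unsigned int slot)
    {
        if (slot != 0 && --t->slots[slot].refcount == 0) {
            t->slots[slot].nextFree = t->firstFree;
            t->firstFree = slot;
        }
    }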
[15:10:37] It has occurred to multiple folks that xattrs (and perhaps acls) could be efficiently handled by something like a per-volume embedded database. but other than marcus, I don't know of anyone who has suggested files be represented that way (experiments that tried that don't seem very successful).
[15:12:20] For map-to-fid, with acls as an alternate stream, you have to have 2 fids, I guess for the fid and its acls? and then you have an fid which should not be published, and can it have... acls and xattrs?
[15:13:05] So it felt more intuitive that such a stream, if that's how you represented it, wouldn't have an fid.
[15:16:11] I guess it depends on how you intend on representing these things. For example, one of the suggestions for how to simply do metadata is to leverage AppleDouble files.
[15:16:32] (a side effect of xattr->xattr, I think, is possibly quite variable performance depending on the underlying fs)
[15:16:47] In that case, both the FID and the file are visible to the client and, in fact, the client would be responsible for managing the xattrs in that file.
[15:18:39] Yes. It might be nice to support a couple of notions. I certainly have uses for POSIX type attrs, including ones that clients might be forbidden to view.
[15:19:35] --- Derrick Brashear has become available
[15:19:48] Two specific concepts. Undelete support. And de-duplication support.
[15:27:00] > It would be nice if whatever is done supported xattr
[15:27:12] if it's a new format i agree. if it's for this format, i don't think that's possible
[15:28:19] if there is a new format in the works, would it be possible to support hard links (within the same volume of course)
[15:28:36] Only if per-directory ACLs die a death.
[15:28:53] > That doesn't make claiming multiple fids seem intuitive.
second fid backed by an AppleDouble file, except until there are revised directory objects, this sucks
[15:29:12] There's no technical reason (to the best of my knowledge) not to support hard links, beyond the fact that it means you can have the same file with two different sets of access permissions.
[15:29:27] I'll handle that myself
[15:29:38] thus using just one additional fid, one additional directory object, it's portable, it can be supported today, and it can be merged tomorrow if you have an xattr-capable backend
[15:29:39] I just want to be able to store rsync snapshots in AFS
[15:30:21] Actually, I think parentVnode might break down ...
[15:30:47] But the _directory_ format can definitely handle the same FID in multiple locations.
[15:31:13] hard links within the same volume are only a limitation because of security
[15:31:49] parentVnode already breaks for mount points, sooner or later we need to deal anyway
[15:32:02] (sorry for the behind-ness; i was walking)
[15:32:07] --- deason has left
[15:32:26] i now have found a pub and am waiting to see which of mcgarr or i is sad at the end of the night
[15:32:36] --- deason has become available
[15:32:40] Why will one of you be sad?
[15:33:36] either the red wings win (i'm sad) or the pens win (and he is less sad, and there's one more game)
[15:33:46] --- stevenjenkins has left
[15:33:49] Ah. Sport.
[15:33:56] I've heard that security reasoning for hard links before, but I didn't/don't understand it
[15:34:04] doesn't that still happen with normal hard links on non-AFS?
[15:34:07] Capability bits are numbers, rather than protocol fields.
We've had pretty good luck with people registering things that are numbers
[15:34:28] non-AFS doesn't have all of its access control done at the directory level.
[15:34:28] hard links across directories: which directory's ACL do you enforce?
[15:34:30] and why?
[15:34:43] in not-AFS, you have no directory ACLs.
[15:34:51] ... when it's not easy to tell which path the user took to reach the file ...
[15:35:00] yeah, especially that
[15:35:20] how about check both, and if the ACLs are the same, then use that
[15:35:33] How do I find both?
[15:35:43] I thought you'd just use the ACL for the directory of each individual one
[15:35:49] (not that I've thought it out or anything)
[15:35:53] if you don't know there are multiple, then just use the one that you know about
[15:36:11] And then the behaviour changes if the user happens to have gone into the other directory?
[15:36:17] i.e. foo/a and bar/a are links to the same file; access foo/a, get the ACL for foo; access bar/a, get the ACL for bar
[15:36:24] yes
[15:36:40] that's actually useful semantics for legit cases.
[15:36:43] behavior changing because you used the other path would play havoc on the access cache
[15:36:51] oh, I see
[15:36:54] a technical reason
[15:36:59] which doesn't give a rat's ass about path
[15:37:02] --- stevenjenkins has become available
[15:37:03] Also, knowing which path the request came from is _hard_.
[15:37:12] well, technical, e.g. "do you want fast or not?"
[15:37:35] I just want hardlinks in AFS volumes so that I can get some remaining hard-to-backup things into AFS
[15:37:37] and what simon said. we already have issues of sometimes the OS does weird things
[15:37:52] so hardlink them within one directory?
[15:37:53] the ACLs would be the same on all directories in this volume so it's not a problem in my case
[15:38:08] rsync snapshots don't work like that
[15:38:21] I need to preserve the entire directory tree
[15:38:32] you already admitted "open to modifying code". rsync is open source....
[15:38:45] whoa now
[15:38:51] Or, you could modify the cache manager...
[15:38:56] that too
[15:38:58] I said nothing about modifying rsync
[15:39:02] i did
[15:39:12] and I may not be able to do that
[15:39:14] you said "rsync needs X" and i said "so give it that"
[15:39:19] you, or anyone?
[15:39:20] certain appliances come with rsync
[15:39:40] and those appliances come with afs?
[15:39:49] I completely accept that AFS having hardlinks would be useful. It would make certain of my users happy.
[15:39:51] no, just rsync
[15:39:56] making modifications to it rather hard
[15:40:00] i'm not dicking with you. i am trying to understand the problem domain
[15:40:08] is it talking to rsyncd?
[15:40:17] modify the other side's rsync?
[15:40:29] rsync with --link-dest option
[15:40:36] In the long run, hardlinks would be good. They make things like git clones much, much cheaper.
[15:40:38] since, if you write to afs, some machine must be the one with afs, and it's not the one with the "closed" rsync
[15:41:17] yeah, I guess that is true. the rsync with AFS access can always be modified
[15:41:25] ding!
[15:41:26] I wonder if, in addition to having volumes flagged as supporting per-file ACLs, we can have volumes which do not support directory ACLs, and so permit hard links.
[15:41:32] someone has to be the writer
[15:41:45] that may be possible
[15:41:51] i've often thought of ginning up an rsync that stores things in some sort of blobby format that gets translated to what it needs at the protocol level.
it's usually at the bottom of my list, though.
[15:42:10] hard links,
[15:42:12] devices,
[15:42:15] fifos,
[15:42:17] --- deason has left
[15:42:20] and a pony for Derrick.
[15:42:26] /dev/pony?
[15:42:41] cat /dev/pony | /proc/mincer ?
[15:43:14] --- stevenjenkins has left
[15:44:28] you do realize you just posited the notion of executable programs in /proc, right?
[15:44:44] Well, it is a mincer ...
[15:45:55] who did?
[15:46:03] oh
[15:46:36] --- stevenjenkins has become available
[15:48:40] --- deason has become available
[16:00:30] --- cclausen has left
[16:08:21] > Is VnodeDiskObject exposed in volume dumps?
[16:08:24] not as such
[16:30:17] --- matt has left
[16:31:03] --- edgester has become available
[16:42:07] Bah. We can't nicely background our readpage copies, as the mechanism to receive notifications about page cache changes is EXPORT_SYMBOL_GPL.
[16:42:28] "assume they complete"
[16:42:42] No, it's having to start them that's a problem.
[16:43:05] You've got the cache filesystem async reading into a backing page.
[16:43:19] When that unlocks the page, you then want to copy the contents into your own page, and unlock that.
[16:43:28] Problem is watching for that page unlock.
[17:00:42] so is google only unreachable from where i am, or also cmu?
[17:00:45] ok then
[17:05:11] google works fine for me
[17:05:35] cmu.edu also works
[17:10:56] --- Derrick Brashear has left
[17:11:28] --- bpoliakoff has left
[17:23:04] and, google works from CMU
[17:30:54] --- cclausen has become available
[18:24:06] --- edgester has left
[18:36:58] --- Russ has left: Disconnected
[18:55:49] --- rra has become available
[19:21:25] --- mdionne has left
[20:38:24] --- Derrick Brashear has become available
[20:48:29] google worked from where i was by the time the pens game started. and then i folded up
[21:22:00] --- Derrick Brashear has left
[21:27:57] --- Derrick Brashear has become available
[22:03:16] --- Jeffrey Altman has left
[22:32:28] --- deason has left
[22:39:20] --- Jeffrey Altman has become available
[22:43:56] --- reuteras has become available
[23:29:58] --- dev-zero@jabber.org has become available
[23:30:04] --- dev-zero@jabber.org has left: offline
[00:00:22] --- rra has left: Disconnected