[01:09:23] --- Stephan Wiesand has become available
[04:41:08] --- paul.smeddle has become available
[04:57:21] Hello
[05:02:50] --- Stephan Wiesand has left
[05:12:15] --- Stephan Wiesand has become available
[05:12:40] Hello
[05:16:37] /topic 1.6.2pre1
[05:18:10] ok, didn't work... don't believe everything you find on the net...
[05:24:52] I've just tried, and received: Error message from server: Forbidden
[05:26:37] --- meffie has become available
[05:27:26] Hi Mike
[05:27:51] good morning/afternoon
[05:36:49] --- meffie has left
[05:39:39] --- paul.smeddle is now known as Paul Smeddle
[05:41:50] --- mmeffie has become available
[05:42:42] --- mmeffie is now known as meffie
[05:47:17] --- Paul Smeddle is now known as paul.smeddle
[07:00:02] good morning / afternoon all
[07:00:15] Hi All
[07:01:35] That was the second glitch in the invitation - "afternoon" is rather local.
[07:01:35] --- deason has become available
[07:02:21] :)
[07:02:52] Hello
[07:03:00] --- paul.smeddle is now known as Paul Smeddle
[07:03:37] Hmm, give me a moment, my client seems to be generating nick changes every few minutes
[07:03:47] --- Paul Smeddle has left
[07:03:47] --- paul.smeddle has become available
[07:03:59] and it's getting worse...
[07:05:02] Meanwhile, maybe: Ken, did you perform the test installation you planned?
[07:05:33] --- paul.smeddle has become available
[07:06:16] I did. I built RPMs for Fedora 18 based on 7472866, and set up a new test cell from scratch with them on two VMs. Everything seemed to work fine, including homedirs in AFS with pam_afs_session
[07:07:38] Obviously this doesn't come close to exercising everything, since all the volumes were very small, and I wasn't testing interoperability with other platforms, etc. But it is something.
[07:08:12] It is! Great. Alas, it seems we'll have to add at least a few changes before pre1.
[07:09:25] Paul, your client looks ok now?
[07:09:28] So, I might be missing something, but the current HEAD of openafs-stable-1_6_x is a patch to the windows client. Is that right?
[07:10:02] correct, commit b2d17370
[07:10:05] Stephan, I think my gmail chat box and my client were fighting over my nick in the conference
[07:10:29] yeah, I've seen that happen on Gmail too :(
[07:10:49] Ok, that windows patch doesn't matter, right?
[07:11:16] I was just wondering why it was on the branch ...
[07:11:24] occasionally Jeff pulls Windows changes into 1.6
[07:11:30] ok
[07:12:37] Shall we go through the list of changes Jeff brought up yesterday?
[07:13:03] Sounds like a good plan.
[07:13:34] Ok, the first one was https://rt.central.org/rt/Ticket/Display.html?id=131505
[07:13:50] (1.6 has a windows client, too...)
[07:14:00] 1.6.x is the last SMB based series of OpenAFS for Windows. There will be a Windows 1.6.2.
[07:14:00] ah, I have no login to rt
[07:14:08] guest/guest
[07:14:13] gotcha
[07:14:58] 131505 is near-trivial to write a patch for; I can submit something lightly tested today
[07:15:22] Ok.
[07:15:49] Then there's gerrit 8484.
[07:16:46] unfortunately I didn't get to test 8484 but I understand it's pretty crucial
[07:17:03] Sounds like it's almost a given
[07:17:59] yeah, I assumed that was going in
[07:18:08] And Derrick and Andrew +1'ed it. I think it should be merged.
[07:18:55] 8484 has been merged
[07:19:01] deason mentions it is fairly easy to test.
[07:19:22] ok. Then there's "8512 and 8513 or something else that fixes the RX_INVALID_OPERATION abort problem"
[07:19:53] those are still under review and require testing
[07:20:26] --- Derrick Brashear has become available
[07:20:28] do they block 1.6.2?
[07:20:33] Was there not the option of removing the patch that added RX_INVALID_OPERATION?
[07:21:32] Yes, Jeff mentioned that. Assuming that we want to get serious about pre1 after next week's meeting, is it likely that we'll have something to discuss then?
[07:22:21] a resolution to the rx issue to discuss, or a likely pre1 to discuss?
[07:22:49] if you want to wait a week, we can discuss the issue then; if you don't want to wait, the functionality can just be ripped out
[07:22:57] Let me try to explain what the RX_INVALID_OPERATION patch does.
[07:22:59] anything fixing the issue ready to be merged, be it a revert
[07:23:30] yes, please
[07:24:38] if ripping it out is a safe fallback, postponing it sounds attractive?
[07:24:41] OSD is now implemented by adding a new service to the existing processes.
[07:25:17] yes, we have a safe fallback of a revert.
[07:25:41] OSD clients that issue a request on the new serviceId to a non-OSD or old OSD server will currently have to wait for a timeout to occur because the rx stack drops packets for unrecognized serviceIds on the floor.
[07:25:51] (still interested in Jeff's explanation)
[07:27:32] The RX_INVALID_OPERATION abort is intended to indicate to the client that the serviceId is not recognized, so that the client doesn't wait the full timeout and retry period before giving up. Without RX_INVALID_OPERATION, the client also cannot distinguish between a server that offers the serviceId and one that simply stopped responding.
[07:28:19] Though I'm biased towards OSD: let's see whether we have something solid next week - if not, rip it out?
[07:29:31] that would be my preference also.
[07:29:38] The original patch responded with RX_INVALID_OPERATION aborts not only to packets for an unrecognized serviceId, but to any packet on a connection that could not be found or established. This was discovered last week to cause abort storms between clients and servers under weird conditions.
[07:30:15] So it doesn't hinder testing under non-weird conditions?
[07:30:16] 8512 I believe is an appropriate constraint of RX_INVALID_OPERATION aborts to only the serviceId case.
[07:30:58] Under normal conditions for existing non-OSD clients, you should never see RX_INVALID_OPERATION.
[07:31:06] That can be tested.
[07:31:33] 8513 sounds like it will be harder to test
[07:31:33] RX_INVALID_OPERATION for unrecognized serviceIds is also relatively easy to test.
[07:31:54] 8513 is a related but different issue.
[07:32:55] okay, but consensus is to defer a week, right? can we move on?
[07:33:04] At the moment, rx is not properly setting the direction flag (client initiated or not) in abort packets, which can leave the receiver confused as to whether or not the abort actually applies to the connection.
[07:33:50] I guess we agree on deferring 8512.
[07:34:08] 8513 is not a new problem, is it?
[07:34:57] 8513 is not a new problem.
[07:35:36] Defer, and leave it for 1.6.3 unless we have something solid next week?
[07:35:54] Great, so who's going to test 8512?
[07:39:23] Right, do we have any other changes on the table?
[07:39:36] 8512 and 8513 will be tested by Chaskiel Grundman but there is no commitment for when
[07:39:48] ok
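In outline, the serviceId handling discussed above works like this. A minimal illustrative sketch in C, not OpenAFS code: struct rx_packet, service_is_registered() and send_abort() are hypothetical stand-ins for the rx internals, and the abort code value is invented.

    /* Sketch of the serviceId handling discussed above.
     * Hypothetical names throughout; not the real rx code. */
    #include <stdio.h>

    #define RX_INVALID_OPERATION 1   /* illustrative abort code value */

    struct rx_packet {
        int serviceId;
        int client_initiated;        /* direction flag from the 8513 discussion */
    };

    static int service_is_registered(int serviceId)
    {
        /* stand-in for a lookup in the host's service table */
        return serviceId == 1;       /* e.g. only the classic fileserver */
    }

    static void send_abort(const struct rx_packet *p, int code)
    {
        /* Per the 8513 discussion: the abort must carry the correct
         * direction flag, or the receiver may not associate it with
         * the right connection. */
        printf("abort: code=%d client-initiated=%d\n",
               code, !p->client_initiated);
    }

    static void handle_incoming(const struct rx_packet *p)
    {
        if (!service_is_registered(p->serviceId)) {
            /* Old behavior: drop the packet, forcing e.g. an OSD client
             * probing a non-OSD server to wait out its full timeout and
             * retry period. With the 8512-style constraint, abort fast,
             * and only for this unknown-serviceId case. */
            send_abort(p, RX_INVALID_OPERATION);
            return;
        }
        /* ... normal dispatch to the registered service ... */
    }

    int main(void)
    {
        struct rx_packet probe = { 2 /* unknown OSD serviceId */, 1 };
        handle_incoming(&probe);
        return 0;
    }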
[07:40:10] Ok. Then Jeff mentioned "one or two OS X issues".
[07:40:47] none of which have patches yet
[07:40:59] Blocking 1.6.2?
[07:41:24] or put another way, patches likely in the next week?
[07:41:25] no, but i will send you pullups if i get resolution and it can be done affecting nothing else
[07:41:33] cool
[07:41:53] I don't think the OSX issues should block a pre1
[07:41:59] no
[07:42:11] Ok.
[07:42:49] Anyone else see changes that must happen before pre1?
[07:42:54] yes
[07:43:19] (several things, listing one at a time...)
[07:43:38] gerrit 8489, but I don't expect that to be controversial
[07:44:07] agreed
[07:45:14] there is gerrit 8368/8370, which is the stuff to panic on certain getdslot errors
[07:45:30] Ok, Andrew, can you submit 8489 for 1_6_x?
[07:45:39] there are the subsequent changes to try to not panic, as well, but
[07:45:40] stephan: yes
[07:46:25] ...but that is possibly not as 'safe', since trying to recover from those situations can result in the cache information getting screwed up (hash tables corrupt, etc)
[07:46:35] that is, they "can" do that if the patches are wrong
[07:47:09] so, I was thinking to just include 8368/8370 to be safer
[07:47:51] I'd really have to look at this one for a long time to judge.
[07:48:10] is 8368 just a logging change, not a functional change?
[07:48:16] well, 8368 looks trivial
[07:48:33] er, sorry, yeah, I just had it categorized under that issue
[07:48:45] that one doesn't matter so much
[07:48:45] Right, but the other one looks scary.
[07:49:24] Is this fixing anything that would be new in 1.6.2?
[07:49:56] well, at least 8370 can go in before pre1; in my head the options were either 8370 itself, or 8370 plus... I'd need to look up the numbers for the other commits
[07:50:41] 8370 is... sort-of new; the general problem we're trying to solve is one that has always existed and is very difficult to reproduce and get information about
[07:50:57] but 8370 specifically is for fixing an incorrect/incomplete previous fix for the issue
[07:51:24] and is that incomplete fix post-1.6.1?
[07:53:36] I think so... it was bacb835a
[07:53:46] Asking differently, is 8370 required to avoid a known regression in 1.6.2?
[07:54:03] okay, yes, it is post-1.6.1
[07:54:32] no, it is not for a regression, but I do not see how it can make things worse
[07:54:56] it panics in a situation that would otherwise cause a panic or cache corruption later
[07:56:16] Ask Andrew to submit it against 1_6_x and discuss it next week?
[07:56:18] my take is at worst it moves a panic and at best it saves you from corruption: take it
[07:56:50] cool
[07:57:10] Andrew, can you do it?
[07:57:16] yes, certainly
[07:57:17] moving on?
[07:57:26] ok for me
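The getdslot trade-off discussed above (8368/8370) is essentially: fail loudly at the point of error rather than attempt a recovery that may leave the cache hash tables inconsistent. A hedged sketch under that reading; afs_GetDSlot_sketch, osi_Panic_sketch and read_dslot_from_disk are illustrative names, not the real code paths.

    /* Sketch of the "panic instead of recover" choice for getdslot
     * errors. Hypothetical names; not the actual OpenAFS dcache code. */
    #include <stdio.h>
    #include <stdlib.h>

    struct dcache { int slot; };

    static void osi_Panic_sketch(const char *fmt, int slot)
    {
        fprintf(stderr, fmt, slot);
        abort();
    }

    /* stand-in for reading a disk-cache slot; returns NULL on I/O error */
    static struct dcache *read_dslot_from_disk(int slot)
    {
        (void)slot;
        return NULL;    /* simulate the error case for the sketch */
    }

    static struct dcache *afs_GetDSlot_sketch(int slot)
    {
        struct dcache *tdc = read_dslot_from_disk(slot);
        if (tdc == NULL) {
            /* Trying to limp on here risks inserting a half-initialized
             * entry into the dcache hash tables, i.e. the "cache
             * information getting screwed up" mentioned above. Panicking
             * now trades an immediate, diagnosable crash for a later
             * panic or silent cache corruption. */
            osi_Panic_sketch("afs_GetDSlot: error reading dslot %d\n", slot);
        }
        return tdc;
    }

    int main(void)
    {
        afs_GetDSlot_sketch(42);   /* panics in this simulated error case */
        return 0;
    }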
[07:57:51] gerrit 8287; not a huge issue, but the fix is trivial
[07:58:47] I vote for pulling it up
[07:59:05] Looks innocent to me. Paul?
[08:01:03] (I'm just going to move on... if paul has something, we can come back :)
[08:01:17] ok :-)
[08:02:00] gerrit 8203; that one itself is not significant, but I'd like to flag "some DAFS VRequestSalvage_r stuff" for next week
[08:02:17] there are some other changes (or maybe just one) related to that, but I need to check on them
[08:02:36] no discussion required; just want to note it here if someone checks for what to talk about next week, moving on...
[08:02:56] gerrit 8197; I'm not sure if we want this in; just raising the possibility
[08:03:59] sorry, no issues with 8287 -- was interrupted
[08:04:26] 8197 seems innocuous enough but i don't care overly
[08:04:41] Sorry for being slow reading them in gerrit. Does 8197 avoid contention in the client, or speed it up?
[08:05:02] the motivation for 8197 is that you can currently get spammed by log messages if you do something requiring locks on data in an RO volume
[08:05:27] which seems silly, because the warning we give is not applicable to RO data
[08:05:46] it's not new or anything (at least, not newer than the "byte-range locks ignored blah blah" message)
[08:06:06] if you could rid me of those...
[08:06:35] I'm a fan of 8197 in theory but if it's a new feature, maybe it should wait for 1.6.3?
[08:06:42] without 8197 does the client send lock requests to the file server?
[08:07:26] jaltman: yes, but the fileserver ignores it
[08:07:30] er, wait
[08:07:35] for byte-range locks, no
[08:07:37] i don't think of 8197 as a feature
[08:08:32] for whole-file locks, "yes but the current fileserver ignores them"
[08:09:08] sounds a bit like a feature to me - but...
[08:09:14] with 8197 the client stops sending whole-file lock requests for .readonly volume objects to the file server?
[08:10:54] I think so; I think both types go through afs_lockctl but I may be remembering wrong
[08:11:19] (and I'm hearing that this should wait; I'm fine with that)
[08:13:02] I'm not going to argue with you; you have the actual knowledge. But maybe this one could wait?
[08:13:08] in my mind there are two issues. one, if the client is sending lock requests it should not be sending to the file server, that is a bug which should be fixed. lock requests can be very noisy on the wire and put load on the file servers, so if this fixes that bug I want it. The second issue is the logging. It should be possible to suppress the log messages on .readonly objects without avoiding tracking of the locks, but I don't really care. The windows clients do track locks on readonly objects because it is important that windows file systems do so.
[08:13:19] but this is not windows
[08:14:03] important for windows, in terms of tracking locally on the client?
[08:14:25] since the file servers won't, the client must
[08:15:42] the fact that the data won't change is not the only reason locks are obtained by applications.
[08:15:57] yeah, I just mean, a lock on an RO isn't going to prevent activity on other machines; if it's for recording info that needs to be returned for some kind of "gimme what locks are held" query then okay
[08:16:26] two processes may serialize activities between themselves by obtaining an exclusive lock on a file
[08:18:01] machine-local though
[08:18:03] yes, but on the fileserver? I didn't think you could get an excl lock on the fileserver for an RO
[08:18:04] in windows, that must be enforced by each file system. The UNIX/Linux vfs may handle that at a layer above the individual file systems. I don't know
[08:18:27] if just machine-local, then yeah, I get it
[08:18:29] you cannot get a lock on a .readonly from an AFS file server
[08:18:54] Postpone to 1.6.3?
[08:19:06] yes yes, I'm just making sure you're not relying on that for mutual exclusion between machines
[08:19:13] Stephan: yes, sorry for the distraction
[08:19:33] Anything else on the list?
[08:19:44] Andrew: No problem. It's interesting!
[08:19:46] the most common case is a process that forks and coordinates with its child
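What 8197 boils down to, per the exchange above: skip both the lock RPC and the warning when the object lives in a read-only volume, since the fileserver never grants locks on .readonly data, while locks can still be tracked locally for same-machine serialization. A hedged sketch under those assumptions; the CRO flag, afs_lockctl_sketch and afs_warn are illustrative stand-ins, not the exact 1.6 client code.

    /* Sketch of suppressing lock handling/warnings on RO volumes.
     * Hypothetical names; not the actual afs_lockctl logic. */
    #include <stdio.h>

    struct vcache_sketch { int states; };
    #define CRO 0x1   /* hypothetical "object lives in an RO volume" flag */

    static void afs_warn(const char *msg) { fprintf(stderr, "%s", msg); }

    static int afs_lockctl_sketch(struct vcache_sketch *avc, int byte_range)
    {
        if (avc->states & CRO) {
            /* The fileserver never grants locks on .readonly volumes,
             * so neither a whole-file lock RPC nor the warning below is
             * useful here; this is the log spam 8197 removes. Locks can
             * still be tracked locally, which suffices for same-machine
             * serialization (e.g. a process coordinating with its child). */
            return 0;
        }
        if (byte_range) {
            /* byte-range locks are never sent to the fileserver */
            afs_warn("afs: byte-range locks only enforced locally\n");
            return 0;
        }
        /* ... send a whole-file lock request to the fileserver, which
         * the discussion above says current fileservers ignore ... */
        return 0;
    }

    int main(void)
    {
        struct vcache_sketch ro = { CRO }, rw = { 0 };
        afs_lockctl_sketch(&ro, 1);   /* silent: RO volume */
        afs_lockctl_sketch(&rw, 1);   /* warns about byte-range locks */
        return 0;
    }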
[08:20:42] gerrit 7916; unless I'm confused about what I'm looking at, this is the trivial second half of an existing fix I just forgot to pull
[08:21:42] pull it
[08:21:44] pull it
[08:22:07] Fine with me.
[08:22:19] gerrit 8462; trivial, probably should've been done a while ago
[08:22:35] oh, yeah, 7916 is needed.
[08:23:24] pull it
[08:23:28] Hmm, "idledead" makes me shiver.
[08:23:47] this one really is trivial; don't worry about it
[08:24:02] moving on... gerrit 8471
[08:24:05] it makes the unix clients behave like windows has for years. I'm not worried
[08:24:35] 8471 needs some more review/comments/etc, but I really would like something to address this to go in
[08:24:54] it makes it not-difficult to panic unix non-linux clients
[08:25:13] er, wait, I need to check when that was introduced....
[08:25:15] i marked it so i'd remember to review it, i want to check the variables
[08:25:33] or no, it's definitely applicable on 1.6, that's where I saw it, duh
[08:25:52] it will apply to 1.6 also, yes
[08:26:19] (that's the last thing I have, just btw)
[08:26:39] it seems to change behaviour on linux too?
[08:27:50] linux will have dynamic vcaches enabled by default, and so it won't matter there
[08:28:06] well no, that patch removes the !afsd_dynamic_vcaches part of that conditional
[08:28:11] but it doesn't need to; I could add it back
[08:28:26] that would make me more comfortable
[08:29:17] I think currently that loop doesn't have an infinite-loop check for linux, then, but afaik that's never been a problem for dynamic vcaches
[08:30:19] I'll split it into two patches; removing the !afsd_dynamic_vcaches I think should go in sometime, but I will target that for post-1.6.2
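The 8471 discussion concerns a scan for a reusable vcache entry: without a bound it can spin forever on static-vcache (non-Linux) platforms, while with dynamic vcaches the shortage cannot arise, which is why keeping the !afsd_dynamic_vcaches condition was requested. A hypothetical sketch of such a guard; all names are illustrative, and this is not the actual client code.

    /* Sketch of bounding the vcache-reclaim scan discussed for 8471.
     * Hypothetical names; not the real vcache allocation path. */
    #include <stdio.h>
    #include <stdlib.h>

    struct vcache_sk { int in_use; };

    static int afsd_dynamic_vcaches = 0;  /* Linux default is on; off here */

    static struct vcache_sk *allocate_new_vcache(void)
    {
        return calloc(1, sizeof(struct vcache_sk));
    }

    static struct vcache_sk *try_reclaim_one(void)
    {
        return NULL;  /* simulate "nothing reclaimable" for the sketch */
    }

    static struct vcache_sk *get_free_vcache_sketch(void)
    {
        int attempts = 0;
        struct vcache_sk *avc;

        /* With dynamic vcaches (the Linux default) a fresh entry can
         * always be allocated, so the scan below cannot become
         * unbounded there; hence the suggestion above to keep the
         * !afsd_dynamic_vcaches condition. */
        if (afsd_dynamic_vcaches)
            return allocate_new_vcache();

        while ((avc = try_reclaim_one()) == NULL) {
            /* On static-vcache platforms, give up loudly instead of
             * spinning forever when no entry ever becomes reclaimable;
             * this is the infinite-loop check discussed above. */
            if (++attempts > 1000) {
                fprintf(stderr, "panic: no reusable vcache entries\n");
                abort();
            }
        }
        return avc;
    }

    int main(void)
    {
        get_free_vcache_sketch();   /* aborts in this simulated shortage */
        return 0;
    }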
[08:30:38] Ok. Andrew, I guess you volunteer for submitting all those to gerrit for 1_6_x?
[08:31:00] the ones I mentioned that just need submitting, yeah
[08:31:14] should I just be approving/submitting these without waiting for anything?
[08:31:24] that was my question too
[08:32:28] I think those that have consensus should be safe to just approve, like 7916
[08:32:56] yes. But there was one Derrick wanted to review.
[08:33:17] it's not submitted to master yet so that's fine
[08:33:24] nothing to pull up (yet)
[08:33:26] Paul and Stephan will submit the approved items on 1.6
[08:33:27] oh sorry, I forgot one... 8463; simon's comment seemed to indicate that was trivially "yes this should be done"
[08:34:39] Derrick merged it to master
[08:34:51] pull it
[08:35:21] Looks ok to me at first glance.
[08:35:27] we should finish 8464 as jeff describes also, but that's out of scope for this meeting.
[08:36:00] Shivering some more...
[08:36:03] 8464 I haven't had a chance to review; I was deferring it until later, as it's not new
[08:36:15] it's not ready for 1.6 let alone master
[08:36:57] Postponed?
[08:37:12] postponed, but probably further than just next week
[08:37:25] so anyone other than Andrew have any more commits we should look at?
[08:37:37] Andrew: that's what I thought, yes.
[08:37:43] Can we discuss what our policy will be for pulling anything into 1_6_x this week? Are we "freezing" except for the issues we mentioned today?
[08:38:26] i have some very minor fixes which are already on master, that i've not pulled up to the 1.6 branch.
[08:38:36] mike, know what they are?
[08:38:50] the policy should be that nothing is submitted to 1_6_x from this point forward without paul and stephan approving its inclusion. we do not want to create a 1_6_2 branch
[08:39:16] push to gerrit, submit nothing unless the release manager(s) want it.
[08:39:29] they can be put into gerrit but they should not be submitted until after 1_6_2 is finished
[08:39:50] sounds good
[08:40:01] At least not before a discussion in the next meeting.
[08:40:09] right
[08:40:20] I have to go through the windows fix list and determine what is applicable that has not been pulled. I will do so before next Wednesday
[08:40:21] This week, we should only submit things agreed upon today.
[08:41:00] so.... am I not ever supposed to be using my rights on 1_6_x? should I even have them?
[08:41:22] well, presumably you can for the things we just talked about
[08:41:24] I think you can merge anything we decided is ok today
[08:41:27] for example gerrit 7920
[08:41:55] mike, i think that should be pulled up.
[08:42:15] ok, will do.
[08:42:20] Deason: You could think of it as "we're currently in pre-release freeze, so release manager approval is needed"
[08:42:27] okay
[08:43:27] sounds good to me
[08:43:52] anything else, meffie?
[08:45:02] commit 03b87df bozo: dont lie when binding to any address
[08:45:15] yeah. that too.
[08:45:23] that fixes a log message, introduced in 1.6.x
[08:46:27] yes, at least put it in gerrit
[08:46:36] Looks similar to one we already agreed on.
[08:47:18] Or no, rather not.
[08:47:27] another bozo fix; 170ce3d bozo: retry start after error stops
[08:48:18] i would be fine with that one but it's less trivial/obvious
[08:48:52] I would wait, just because of the size of that, and it's borderline feature
[08:49:12] 1.6.3?
[08:49:49] yeah, I'd say that's a feature
[08:50:17] ok
[08:51:11] Ok, anything else?
[08:52:08] there is: 639ca37 vol: rate-limit volume usage updates
[08:53:11] yes please
[08:53:21] --- meffie has left
[08:53:21] I'd like that one :D
[08:53:22] yes
[08:54:00] No point in objecting, I guess ;-)
[08:54:01] We're running with it.
[08:54:06] --- mmeffie has become available
[08:54:32] (sorry network is fickle)
[08:54:35] Is this fsync()'ed whenever written?
[08:55:17] no
[08:55:31] --- Marc Dionne has become available
[08:56:05] Why does it make such a difference then to write only every 5 secs?
[08:56:32] because right now you spin on writing it to disk repeatedly if you access the vol e.g. thousands of times per second
[08:56:34] If you're doing lots of accesses, you can exceed that 127 counter which was there previously
[08:57:11] Thanks. Just asking, not objecting.
[08:57:23] I have a vested interest in this patch, so I'll defer to Stephan
[08:57:30] just answering :)
[08:57:57] I'd like to see 8470, 8469, 8466 for 3.7
[08:58:06] Paul: Ok. After all, it's already tested ;-)
[08:58:59] that's probably a good idea, since that's a driver for this release!
[09:00:13] Right.
[09:00:23] marc: it's not your turn ;) but "yes"
[09:01:04] 8046 fixes a reported oops
[09:01:47] did we get consensus on Mike's 5803 (vol: rate-limit volume usage updates)?
[09:02:03] Ken: I think we did.
[09:02:10] ok great
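The rate-limiting change (639ca37) works roughly as follows: instead of writing the usage counter to disk on effectively every access, where a hot volume can also overflow the old 127-count window, the write is skipped unless an interval (five seconds in the discussion above) has elapsed. A minimal sketch with hypothetical names; not the actual vol package code.

    /* Sketch of time-based rate limiting for volume usage writes.
     * Hypothetical names; not the real volume header code. */
    #include <stdio.h>
    #include <time.h>

    #define USAGE_WRITE_INTERVAL 5   /* seconds, per the discussion above */

    struct volume_sketch {
        long   accesses;        /* in-memory usage counter */
        time_t last_written;    /* when the counter last went to disk */
    };

    static void write_usage_to_disk(struct volume_sketch *vp)
    {
        /* Note: per the discussion, the write is not fsync()'ed; the
         * win comes purely from issuing far fewer write calls. */
        printf("flushing usage=%ld to volume header\n", vp->accesses);
    }

    static void bump_usage_sketch(struct volume_sketch *vp)
    {
        time_t now = time(NULL);

        vp->accesses++;
        /* Without this check, a volume accessed thousands of times per
         * second rewrites its header just as often ("spin on writing
         * it to disk repeatedly"). With it, at most one write per
         * interval. */
        if (now - vp->last_written >= USAGE_WRITE_INTERVAL) {
            write_usage_to_disk(vp);
            vp->last_written = now;
        }
    }

    int main(void)
    {
        struct volume_sketch v = { 0, 0 };
        for (int i = 0; i < 10000; i++)
            bump_usage_sketch(&v);    /* flushes once, not 10000 times */
        return 0;
    }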
[09:03:08] Can we leave 8046 for 1.6.3? Dropping a lock sounds serious.
[09:03:24] deadlocking also sounds unsafe.
[09:03:43] it fixes an oops... that's an afs lock around a linux vfs operation
[09:04:02] we don't normally need glock for linux vfs stuff; usually we need to _not_ hold it, like here
[09:04:20] it's a take, from my standpoint
[09:04:29] +1
[09:04:52] Not going to argue then.
[09:05:15] it's also at the very end of a (fairly high-level) function during cleanup; we're not going to be confusing something because glock was dropped
[09:06:00] marc: anything else?
[09:06:40] That's all I see for my stuff
[09:06:59] mmeffie: anything else?
[09:07:01] Marc, will you send those to gerrit for 1_6_x?
[09:07:28] Sure
[09:08:49] 8332 is just a cleanup. I wouldn't bother
[09:09:33] Not for pre1.
[09:10:20] Looks like we're through for today then?
[09:11:16] I can send minutes to the release-team list, unless you want to, stephan?
[09:11:18] double checking, nothing obvious
[09:11:46] Ken, I'm not going to fight for that ;-)
[09:11:58] But maybe Paul will?
[09:12:01] no!
[09:12:02] ken, thank you
[09:12:07] ;)
[09:12:12] volser: preserve stats over reclones and restores is a feature.
[09:13:05] After all, we need something for 1.6.3...
[09:13:16] indeed :)
[09:13:22] that actually may be worth discussing next week
[09:13:27] or next version, that's fine too :)
[09:13:53] Ok, trying to summarize:
[09:14:15] (imo, the stats aren't useful without that patch, so even if it's broken...)
[09:14:52] Everybody will send their favourites to gerrit for 1_6_x.
[09:15:23] Paul and I have to submit them after review.
[09:15:26] I'll email this summary to the list: http://pastebin.com/kbdvAgUC along with a link to today's Jabber log
[09:15:51] Paul and I will send the invitation for the next meeting, soon.
[09:16:20] wow, quick work
[09:16:20] jabber log will not be useful til after it rolls over overnight, fwiw
[09:16:38] The next meeting will be the last call for including changes before 1.6.2pre1?
[09:16:55] nice summary ken
[09:16:56] Really? I have used same-day jabber logs before.
[09:17:07] --- mmeffie is now known as meffie
[09:17:34] it looks useful right now, to me
[09:18:07] huh. yeah, reloaded, it's fine. weird.
[09:18:11] There is some testing it would be nice to see by next week, to resolve that RX_INVALID_OPERATION question
[09:18:31] but as Jeff says, the tester isn't on this conference :(
[09:19:23] Ken: Great.
[09:20:08] I spoke with Chaskiel via back channel. He is aware, he just hasn't had the window to apply the patches. The systems running 1.6.x ~HEAD are production systems
[09:23:11] Anything else to discuss?
[09:24:19] Paul, last words?
[09:24:30] ktdreyer: > Derrick: http://gerrit.openafs.org/8463: rx: Lock call for KeepAliveOn/KeepAliveOff -- 8464 is postponed, 8463 should be fine to go in
[09:24:34] (from the pastebin notes)
[09:24:43] roger, I'll adjust it
[09:25:07] nothing from me
[09:25:49] Ok. Thanks a lot everyone!
[09:26:08] Thanks for a successful meeting.
[09:26:16] thanks all
[09:26:40] Yes, thanks
[09:26:45] and see you here next week
[09:27:05] I'll sign off for an hour, but will check back later, just in case. Thanks again.
[09:27:10] --- Stephan Wiesand has left
[09:37:07] --- paul.smeddle has left
[10:47:43] --- paul.smeddle has become available
[10:51:03] --- stephan.wiesand has become available
[11:26:35] Today's meeting was much more substantial than I anticipated (or hoped).
[11:37:56] Yes, it was basically what we were looking for next week!
[11:39:43] --- paul.smeddle has left
[11:40:23] --- stephan.wiesand has left
[13:58:27] --- Marc Dionne has left
[16:12:01] --- deason has left
[18:31:38] --- Derrick Brashear has left
[19:01:13] --- ktdreyer has left