[02:13:21] --- Stephan Wiesand has become available
[04:56:53] --- Jeffrey Altman has left: Disconnected
[05:01:12] --- Jeffrey Altman has become available
[06:42:04] any objections to shipping http://gerrit.openafs.org/#change,9682 with 1.6.3?
[06:46:55] not from me
[07:17:58] it's a utility tool. you should, imo, feel free to do whatever the hell you want to any such noncritical infrastructure pieces. if you wanna make them nice and featureful, go nuts.
[07:18:50] It will be interesting to see if further voices join in on the ihandle_sync thread.
[07:19:09] i feel like i need to rereview the code.
[07:19:48] i recalled doing review when we took the change on master and seeing where we were doing the work instead, and now i can't find it or my notes.
[07:19:55] I don't think it's so much the code (although the commit message is deceptive - it's not ext3's journalling intervals that affect things, but how frequently pdflush flushes pages, which is a property of every Linux file system)
[07:20:11] We're not doing the work elsewhere, that's kind of the point.
[07:20:40] Without the background sync thread, we stop flushing data files out to disk with fsync.
[07:20:51] We still flush the namei special files, iirc.
[07:20:55] we do.
[07:21:03] And we might still flush directory objects.
[07:21:07] What's the alternative? Back to the foreground syncs that were done before?
[07:21:51] we had this argument before and someone will always be unhappy. what guarantee do you make to the client? what does that guarantee me?
[07:21:54] mean, that is
[07:22:01] I think we'd hear howls of pain if we did that, too. The background sync change made things much, much faster. Especially on journalled filesystems where each sync is a metadata event.
[07:23:17] As Derrick says, some people believe that returning success to a client means that their data has made it to media, so if the fileserver falls over, it won't be lost. However, even fsync() doesn't provide that guarantee.
[07:23:23] On a fileserver with a battery-backed write cache, would one notice?
[07:23:39] If the fsync isn't there, yes.
[07:24:07] I mean would one notice a significant slowdown?
[07:24:18] Yes
[07:25:14] fsync() generally ties up the filesystem. Lots of locks have to be taken so that it can get things into a consistent state, and then push stuff out to disk. The sync also doesn't return until the I/O completes.
[07:25:21] Tough one.
[07:25:40] So, not only will the thread performing the sync be blocked, but other threads that are trying to access data on the same block device will slow down (or stop)
[07:26:10] Then you throw in filesystems like ZFS, where a sync in effect creates a snapshot. ZFS syncs are mindblowingly expensive.
[07:26:34] Right.
[07:27:07] So I don't think going back to what we had is an option
[07:27:23] (although Chaskiel seems to think that doing so locally for his site is)
[07:28:03] so leave ih_sync_all to be enabled behind a switch for people like him?
[07:28:20] I didn't dare suggest that.
[07:28:29] i just did :)
[07:28:43] I actually think it would be better to have foreground syncing available behind a switch, and remove background syncing entirely.
[07:29:10] that's invasive for a 1.6.3 change
[07:29:29] background syncing is where the nasty multi-threaded races that Andrew is so concerned about come from. Putting it behind a switch just makes it more likely that those exist (and probably grow)
[07:29:49] i think it is the right answer ultimately. but for 1.6.3 i think that means we have to do a lot more testing than is short term feasible
[07:30:02] Have a 3-way switch?
[07:30:23] --- deason has become available
[07:30:35] abort/retry/fail?
[07:31:09] OFF/FG/BG
[07:31:13] I think we should continue on our path for 1.6.3, but suggest to Chaskiel that if he is prepared to do the work on making foreground sync happen again, we'd take it with a suitable command line option (--i-really-like-my-data-please-dont-trash-it)
[07:31:29] No, background should die for the reasons Andrew has outlined.
[07:31:59] It seems Chaskiel's preferred option.
[07:32:01] In general, we've been against --please-corrupt-my-file options
[07:32:08] honestly, once foreground sync is back, leaving background behind a switch is basically asking for damage since it *will* bitrot
[07:32:12] But then maybe under the assumption that it will be fixed.
[07:36:21] We have this change in 1.6.2 making the syncs non-concurrent. That's insufficient?
[07:36:46] I'm in the foreground sync or no sync camp. I would be opposed to making changes to the sync processing in a stable series except that I am more opposed to data corruption.
[07:37:03] The 1.6.2 change fixes the currently observed problems. But (speaking for Andrew, here) I think the concern is that background is inherently fragile.
[07:37:19] There's also the view that it provides no benefit, but that's obviously being disputed.
[07:38:15] It's a heck of a change for a problem that hasn't been observed.
[07:38:34] I can speak for myself. I agree with Andrew that we have been burned by the background sync code too many times and the YFS team does have additional months to devote to tracking down future failures if they happen to occur.
[07:39:01] s/does/doesn't/ ?
[07:39:13] s/does/does NOT
[07:41:33] Speaking for my servers: I'd rather run without syncs than with something strongly suspected to trash my users' data eventually.
[07:42:00] I would be inclined to leave sync as it is in 1.6.2 until someone is willing to contribute a foreground sync option
[07:42:19] I don't think removing background sync has to happen in 1.6.3
[07:42:39] I figure Andrew wouldn't concur.
[07:43:58] There will be no unanimous agreement. That is why we have a release manager. :-)
[07:44:01] Indeed
[07:44:30] Great... well I knew this would happen.
[07:44:46] that we would make you decide? ;)
[07:45:19] That there would be decisions to take that make someone sad.
[07:45:27] welcome to my life
[07:45:48] Ultimately, it's the gatekeepers who decide ;-)
[07:45:56] we decided to let you decide
[07:46:37] fundamentally i think it comes down to this:
[07:47:08] are you willing to defer a release until there is a version of foreground fsync which has been tested (possibly because it's just the one we previously junked)
[07:47:22] if so, that plus a switch, and kill ih_sync_all
[07:48:10] if not, ih_sync_all behind a switch. 1.6.3 is what it is. the code will not bitrot; either ih_sync_all already works or already doesn't
[07:48:22] Such a release would bring lots of other significant changes. Simon has a point about not letting that happen.
[07:48:29] That's the make-everyone-happy option. Or, make Andrew unhappy by not removing ih_sync_all, or make Chaskiel happy by leaving everything as is.
[07:48:58] isn't that second "or" actually an "and"?
[07:49:23] Yeah, sorry. Final option is 'or make Chaskiel unhappy by removing it'
[07:50:15] another perspective is that we just made a change to sync in 1.6.2 and it should be permitted to bake for a while before making further changes
[07:51:01] That change would be killed together with the thread, no?
[07:51:11] it would
[07:51:47] How's this angle:
[07:52:03] There is no known data corruption problem.
[07:52:16] Let it in, and be the default. Stable series.
[07:52:25] But add an option to kill it.
[07:52:56] an option to kill what?
[07:53:07] background syncing
[07:53:16] it's already in
[07:53:24] and already is the default
[07:53:37] so you propose adding a switch which just turns off ih_sync_all.
[07:53:40] adding a switch to turn off background syncing if desired is another option
[07:53:53] it is the option of least change, to be sure
[07:54:03] Yes, sorry, that's what I meant.
[07:54:11] I would be happy wiat
[07:54:17] me too
[07:55:57] Would we want a compile time or run time switch?
[07:56:22] (compile time is much cheaper?)
[07:56:45] run time, ideally.
[07:57:50] run time please
[07:58:01] compile time switches are too hard to test
[07:58:13] and result in bitrot
[07:58:26] Runtime switches just result in different bit rot.
[07:58:46] also it means people have to build their own binaries and then you get into fun when bugs are reported and you can't do anything with their core
[07:58:58] at least buildbot catches some errors and we can add tests
[08:00:56] This can't be too hard to implement?
[08:01:15] Just don't create the thread if the switch is set?
[08:03:00] basically. wrap lines 177 thru 190 in an if() {}
[08:03:03] (of ihandle.c)
[08:03:45] I'm not really sure of the benefit of doing this, beyond trying to please all of the people
[08:04:34] it's exceedingly trivial to do, so, why not please all the people?
[08:05:29] It's supposed to be a stable release series. The benefit is that we act accordingly, and still provide an easy way to opt out of a feature we consider potentially dangerous.
[08:06:11] So, the downsides are that it doesn't reduce any of the code complexity associated with the feature. All that it does is make it less likely that developers will actually notice if they break something.
[08:07:05] back in like 10 minutes
[08:10:23] So, one more choice, one more person to make unhappy (now it's Simon).
[08:10:49] Yeah, choices, choices, choices
[08:11:00] How about releasing 2.0 in May? ;-)
[08:11:27] I don't particularly care. If I end up having to spend any serious time with the ihandle package, I know I'm going to end up ripping it all out and starting again, anyway.
[08:13:04] I just think that, in general, options are costly, and have to be deployed with care. They're expensive for users (because they have to understand all of the options, and which ones are appropriate to their environment), and they're expensive for developers (because they cause multiple different code paths, all of which must be understood, and because you end up with O(n^2) things that you should test).
[08:14:23] I agree.
[08:14:56] From a distance, the whole issue suggests there is something fundamentally wrong with the "ihandle package" (whatever that is)?
[08:15:54] Oh, it's horrible. It was written in the days when operating systems were slow, and limited, and you had to do all of your own caching to work around their performance (and capacity) issues.
[08:19:40] Andrew just answered on the devel list. Let's see what comes from that thread.
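For reference, a minimal sketch of the "just don't create the thread if the switch is set" idea discussed above. The flag name vol_disable_bg_sync and the wrapper function are hypothetical, and this does not reproduce the actual contents of ihandle.c; it only illustrates where the if () would sit around the existing creation of the background sync thread.

```c
/* Sketch only: guard starting the background sync thread behind a runtime
 * switch. vol_disable_bg_sync and ih_maybe_start_sync_thread() are
 * illustrative names, not existing OpenAFS symbols. */
#include <pthread.h>

extern int vol_disable_bg_sync;     /* set from a command-line switch */
extern void *ih_sync_thread(void *); /* the existing background sync loop */

static void
ih_maybe_start_sync_thread(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    if (vol_disable_bg_sync)
        return;                      /* operator opted out: no background syncing */

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&tid, &attr, ih_sync_thread, NULL);
}
```

As noted above, this is the option of least change: ih_sync_all itself is untouched, the thread simply never starts when the operator opts out.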
[08:19:44] oh man, lots of scrollback
[08:19:57] happily, a lot of what I said I think was also said here
[08:20:16] if the ih_sync_all removal doesn't happen in 1.6.3, that's not so bad
[08:20:37] Seems we're all on the same page. It's just that there is no good solution. Let's face it.
[08:20:38] I expected it to get deferred if someone seriously complained, regardless of how legitimate any complaints were
[08:20:51] (I just didn't expect it to come from non-CERN :)
[08:21:45] I think CERN understand the tradeoffs. With the amount of caching they have on their fileservers, I doubt that calling fsync() actually makes any difference to permanence.
[08:21:56] The one option making *me* sad is deferring the issue further.
[08:22:05] well, the 'good solution' to me is possible, but it requires work to put fsync() calls in better places
[08:22:29] iirc, a lot of ihandle stuff calls it nearly constantly for certain operations, because there's no way to 'batch' a bunch of writes that go together
[08:22:34] How much work?
[08:23:12] also I don't mind deferring this stuff because I think chaskiel will get much angrier than I will if you make him unhappy vs me :)
[08:24:31] I don't have a good way of estimating that. um, "smaller than dafs"? :)
[08:24:58] large enough (in my opinion) to not be useful in a discussion for what to do for the next few stable releases
[08:25:11] more of a long-term thing
[08:25:12] I don't care that much for anger levels of individuals. I care for the least-bad solution - before 1.6.3pre1.
[08:25:27] Well, dafs took.... several years? ;-)
[08:25:51] Ok, got it.
[08:26:06] I didn't mean it's anywhere near dafs in size
[08:26:15] but like, that's my really really broad estimate, ha
[08:26:40] and 'anger level' may not seem as important, but... I dunno, with something like this, I think reported issues and such have a lot to do with perception
[08:26:46] I think the challenge would be the complexity of the code that has to be understood to make the decisions, rather than the scale of code that would have to be written.
[08:26:54] yeah
[08:27:24] if someone thinks the syncs are essential, and we remove them, I'm sure I'll hear a lot of complaining or bug reports etc, giving an example of how it corrupted data
[08:27:46] when there's like a million ways a volume can get corrupt, because we don't ensure consistency during a lot of things
[08:27:53] Indeed.
[08:28:11] But if you are that way inclined, removing syncs is an obvious thing for folk to point a finger at.
[08:28:14] Why isn't it happening on my servers...?
[08:28:17] also, for syncs on fileserver writes, what seems possibly odd to me is that we always fsync() at the end
[08:28:32] since we can detect an application-level fsync() on the client...
[08:28:44] Stephan: well, do you crash them in the middle of write ops all the time?
[08:29:04] No.
[08:29:06] If you're not in the habit of kicking out power leads, you're probably okay :)
[08:29:21] It does happen though.
[08:29:25] that's the scenario we're generally talking about with this
[08:29:52] but you know, when it's the difference of like .001% and .01% of data loss, you're not likely to notice if you lose power, like, once
[08:30:17] (or .1% and 1%, or whatever)
[08:30:49] Can you spell it out to me once more? Removing the thread will negatively affect file content integrity only?
[08:31:02] but if it happens once, naturally whatever you think caused it is then really likely to happen, and "why did you release this, why do you like losing data"
[08:31:06] (in the case of a crash)
[08:31:48] so, one scenario where this is relevant is a write to the fileserver from the client
[08:31:49] My vice partitions are mounted data=writeback...
[08:32:09] normally at the end of the last write, we fsync() the file
[08:32:24] (the 'last write' because we can issue several write operations, if it spans different cache chunks, etc)
[08:33:03] if that fsync() is not issued, and the power goes out, then that data could be lost, even though we returned success to the client e.g. 2 seconds ago
[08:33:20] and not only lost, but I think for different filesystems/platforms there are different failure moes
[08:33:21] modes
[08:33:33] like, you get the missing content as full of NUL bytes or something
[08:34:02] Yeah, that depends on how your filesystem handles metadata updates.
[08:34:11] and depending on which writes hit the disk before the power goes out, we may have recorded that the file size is e.g. 8K, but the write that extended it from 4K to 8K got lost
[08:34:31] so that is an inconsistency in the volume, which prior to patch mumble mumble, isn't ever detected
[08:34:58] (some patch I submitted checks the file size on disk for consistency; but even if that's valid, that of course doesn't mean everything's okay)
[08:35:08] so that's like, the danger
[08:35:21] currently we don't issue that fsync(), but of course we do it like 10 seconds later or whatnot
[08:35:46] the current argument is that if we don't issue the fsync() at all, the operating system (and underlying fs, etc) already makes a similar guarantee that the data is "ok" N seconds in the future
[08:36:00] And we'd still do that (after 10 seconds) without the sync thread?
[08:36:40] (did my immediately preceding response already answer that?)
[08:36:51] It did.
[08:37:50] The file on the vice partition is never closed unless the fileserver runs out of handles?
[08:37:54] Now, not all operating systems make such a clear guarantee. For example, whilst ext3/4 makes clear guarantees about journalled metadata information, what pdflush does is a little more vague.
[08:38:12] the thing about the client's fsync is that we don't necessarily push everything at once, only NCHUNKSATONCE, so the client's fsync is... when? as far as the fileserver knows. otherwise, using the client's intent to provide a guarantee would be a reasonable thing to do
[08:38:21] right now the 10-second delay thing is one of those things where someone not familiar with afs internals can sometimes see it and say "this makes zero sense, what are you doing"
[08:38:49] The problem with having an fsync RPC is that RPCs aren't strictly ordered ...
[08:39:13] shadow: not sure I'm reading that correctly; the client does provide a "please fsync now" option
[08:39:18] to the fileserver
[08:39:21] Simon, I believe ext4's guarantees are much weaker than ext3's.
[08:40:00] > "please fsync now" option -- if the RPCs are handled in order
[08:40:03] > The file on the vice partition is never closed unless the fileserver runs out of handles? -- generally yes, assuming everything's running okay, and there are no vol ops, etc etc
[08:40:22] ouch.
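To make the scenario being described concrete, here is a minimal sketch of a store path, assuming a hypothetical StoreSegment() helper; none of these names come from the OpenAFS tree, and the want_sync parameter only models the client's "please fsync now" intent (AFS_FSYNC in the discussion above). The point is the window between write() returning success and the dirty pages actually reaching media.

```c
/* Illustrative only: why the final fsync() matters for the data the client
 * was already told we stored successfully. */
#include <unistd.h>
#include <errno.h>

/* Write one chunk of client data to the vice partition file. 'last' is true
 * for the final write of the store operation; 'want_sync' models a client
 * request that the data be durable before we report success. */
static int
StoreSegment(int fd, const char *buf, size_t len, int last, int want_sync)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0 || (size_t)n != len)
        return EIO;

    /* At this point the data is only in the page cache. If we return success
     * now and the machine loses power before the OS writes the dirty pages
     * back (flusher threads, syncer, etc.), the data -- and possibly the size
     * update -- is lost even though the client saw the store succeed. */

    if (last && want_sync) {
        /* Foreground sync: success really means "on media" (modulo volatile
         * caches below the filesystem). This is the expensive call the
         * background sync change was introduced to avoid. */
        if (fsync(fd) < 0)
            return EIO;
    }
    return 0;
}
```

Without the final fsync(), "success" only means the kernel has the data; whether it survives a power loss depends on how quickly the OS writes its dirty pages back.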
[08:40:38] I don't see what the fd being closed has to do with this, though
[08:40:51] Closing the fd makes it more likely that the fs will flush the data
[08:40:56] s/fs/os/
[08:41:24] it doesn't require an fsync-level synchronization, but just 'more likely', okay
[08:41:24] when AFS_FSYNC is sent, we already fsync(). it's just not done in the ihandle package
[08:41:32] If you're running in whatever the "i don't like my data much" ext4 option, I think you can end up with pdflush only ever pushing stuff to disk when the system comes under memory pressure.
[08:41:45] > if the RPCs are handled in order -- well, they're handled in order for a single client; we don't issue them in parallel
[08:41:52] or I'm not understanding what you're saying
[08:42:22] in practice it's probably ok. in theory there are edge conditions. that's all.
[08:42:40] Simon: I thought pdflush had a default (of... 30 seconds?) of writing dirty data to disk
[08:43:02] that is, not a guarantee/deadline of 30 seconds, but if it makes a pass and the dirty block has been dirty >30s, it goes
[08:43:07] That's just the default.
[08:43:10] that number may be way off
[08:43:16] pdflush of course only matters on linux
[08:43:23] but if you change it, that's the user's explicit "hey I don't want things syncing" notification
[08:43:32] Yeah, other operating systems have their own foibles.
[08:44:08] But if you're running with writeback, pdflush is the only thing that makes sure your data hits disk - the journalling interval is irrelevant.
[08:44:42] (That's why writeback is a security problem: if you end up getting a block that's been previously used by someone else, and the machine crashes before the data makes it out, you get the last user's data)
[08:44:43] but it seems like these things are providing the user-visible "sync/delayed sync" option that I mentioned
[08:44:51] as in, that option already exists
[08:44:56] though sync/nosync, I can see, certainly
[08:45:23] I think the purists would argue that the RPC returning success should mean that the data is on permanent storage.
[08:45:40] We watered down that guarantee with background syncs, but maybe they didn't notice then.
[08:45:58] I don't think so, since we have an option that explicitly says "make sure the data has hit the physical disk"
[08:46:10] I mean, if that option isn't set, it hasn't hit physical disk; if the option is set, it has
[08:46:12] (on success)
[08:46:39] unless that's what you're saying; if we return success on AFS_FSYNC, it's a guarantee that it's on physical disk
[08:47:14] I'm sure I have heard the argument that returning success at all should indicate that it is on physical disk. Which, as you say, negates AFS_FSYNC
[08:47:50] well I say, if you want that interpretation, it's not hard to do; just always pass the flag :)
[08:47:54] answer from chas on -devel
[08:48:08] Anything else starts to seem vague. Why should the fileserver enforce that data is written out within X seconds of it being received, rather than that being the OS's responsibility?
[08:49:23] i continue to be content with default ih_sync_all to off, switch to allow it on, for 1.6.3, as long as we promise it will go away in favor of something (or in favor of total removal) by 1.6.4
[08:49:30] The lack of close() does make things slightly exciting.
[08:49:46] meaning: 1) either it works today or not. 2) bitrot doesn't matter, we won't care in the next release.
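The pdflush numbers being guessed at above correspond to the Linux vm.dirty_* writeback tunables: dirty data is normally written back once it has been dirty for dirty_expire_centisecs (commonly 3000, i.e. 30 seconds), checked on each flusher pass every dirty_writeback_centisecs (commonly 500, i.e. 5 seconds). A small sketch that just reports the values on the running system; Linux-only, and the defaults vary by kernel and distribution.

```c
/* Print the Linux writeback tunables relevant to the "pdflush default"
 * question above. Sketch only. */
#include <stdio.h>

static void
show(const char *path, const char *what)
{
    FILE *f = fopen(path, "r");
    long val;

    if (f != NULL) {
        if (fscanf(f, "%ld", &val) == 1)
            printf("%-42s %6ld centisecs (%s)\n", path, val, what);
        fclose(f);
    }
}

int
main(void)
{
    /* how long data may sit dirty before the flusher threads write it back */
    show("/proc/sys/vm/dirty_expire_centisecs", "dirty data expiry");
    /* how often the flusher threads wake up and make a pass */
    show("/proc/sys/vm/dirty_writeback_centisecs", "writeback interval");
    return 0;
}
```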
[08:49:50] If a process opens a file, writes to it, and never calls fsync() or close(), is the OS really responsible for writing the data out?
[08:50:25] Per the standards lawyers: no.
[08:50:44] But if an OS did not actually write the data out in such a case, the users would be in an uproar.
[08:51:21] (The BSD analog of linux's pdflush is the kernel syncer thread, which runs once a minute or so, syncing all vnodes with dirty pages, btw.)
[08:51:37] But I can see why an OS with a lot of data to write back would prioritise data from closed file handles. And, on a busy system, that might mean that you wait a long time for a write.
[08:52:24] windows has a lazy writer thread which builds a prioritized list of dirty pages to flush based upon policy applied to each file's handle by the file system.
[08:53:59] does anything else 'delay' syncs like this? (from the application level) that's just why this seems so bonkers to me
[08:54:29] like, if I hear that you put the syncs in the background thread, to me that just sounds very naive
[08:54:46] delayed allocation?
[08:54:55] like, someone saw that whatever this fsync() call does, it sure is taking a lot of time, let's put it in its own thread
[08:55:43] the current ih_sync_thread advocates are certainly not like that, but, that's just the only other time I feel like I've seen that
[08:55:49] Well, one would need to find an application that holds large files open for a long time and never closes them
[08:55:56] AIUI, the reason for moving it all into the background thread was in order to batch syncs together
[08:55:58] ...in order to be a good parallel.
[08:56:33] How does, say, mysql, handle syncing to disk?
[08:56:48] Jeff, do you mean "default ih_sync_all to on, switch to allow it off"? Either way, could we keep that promise?
[08:57:00] Most things these days use mmap(), which has its own lovely set of pitfalls
[08:57:12] from the grumblings I always remember hearing, I thought mysql didn't have a lot of consistency guarantees, ha :)
[08:57:25] that's probably from the myisam days, though, and unrelated to fsync
[08:58:02] At some point the answer is: if you really care about your data, don't let your machines lose power, and have good backups.
[08:58:55] my proposal was to add a switch to permit the background thread to be turned off, but with the background thread enabled by default. this is to ensure that there are minimal changes in 1.6.3, and that if a future problem is discovered with background syncs in the 1.6 series we have an easy way for system administrators to disable it.
[08:59:37] Thanks Jeff, that's what I thought.
[08:59:56] also, regarding the 'danger' of leaving ih_sync_thread on... it did take a while for the most recent corruption to occur (or at least be noticed)
[09:00:21] agreed and understood.
[09:00:25] it seems likely any future problems are at least as unlikely, so it may not be so "urgent"
[09:00:37] I didn't mean that it had to go in right now
[09:01:16] We've been discussing it since the early 1.6.2pre days.
[09:01:42] we've been discussing it on and off since 1.4.7 :) (or whenever it was)
[09:01:51] (er, not including 'me' in that 'we' for all of that time)
[09:01:54] 1.2
[09:02:10] well, I meant specifically pulling out ih_sync_thread, but more generally, sure
[09:02:52] hartmut's original proposal to add it was ~1.2.9 I think.
[09:03:17] NB can someone explain why some talk about ih_sync_all and others about ih_sync_thread ?
[09:03:34] the thread calls the function
[09:03:40] I think they're the same thing, as far as this discussion goes
[09:03:51] Good :)
[09:03:52] the thread is *the only caller of* the function
[09:03:55] I was saying ih_sync_thread to try to be more clear that I'm talking specifically about the fact that we do it in a background thread
[09:05:33] If we don't do anything about it in 1.6.3, I don't want to hear about it ever again unless there's evidence it actually corrupts data.
[09:07:47] well, do you want to delay 1.6.3 for discussions on this?
[09:07:54] No.
[09:08:13] also, a pony
[09:08:21] ?
[09:08:26] that sounds like a vote for doing something about it, then
[09:08:46] that's arguably another idiom, although maybe not american
[09:09:03] [begs for explanation]
[09:09:11] "i would like all of these things which are kind of unlikely"
[09:09:18] a pony is even more unlikely.
[09:09:28] Thanks.
[09:09:35] http://www.codinghorror.com/blog/2006/01/and-a-pony.html
[09:09:51] But I don't think it fits.
[09:10:06] We have all the choices.
[09:10:16] a pony? it doesn't have to fit. i'll just eat it. problem solved.
[09:10:54] We have all the different opinions. And a week to gather more input on the list.
[09:11:00] true
[09:11:30] well, does 'adding an option' (to turn it on/off) count as 'doing something' so it may be revisited?
[09:11:44] What would you do with a unicorn, Derrick?
[09:11:47] Next Wednesday, we should take a decision. And not defer to the next release, where the situation will be exactly the same.
[09:11:49] or you mean you want whatever it is we do to be handled for this release, and in the future it's not changing unless there's a huge huge problem
[09:11:58] okay
[09:12:11] Well, the next release we do from master isn't going to have a background sync thread :)
[09:12:15] Yes, adding an option counts as "doing something".
[09:12:38] See my proposal to release 2.0 in May. Solved.
[09:12:41] :-)
[09:12:50] also pony
[09:12:58] This time, yes.
[09:13:03] 2.0 or 1.10?
[09:13:26] 2.0 is defined as the release that has support for non-DES encryption types...
[09:13:40] omg look at all the ponies!
[09:14:00] Whatever. It's been a while since I was told when all those would be out...
[09:14:37] (folks here stare at me giggling)
[09:15:44] Yeah, we may get to 1.100 yet
[09:16:19] okay, but current plan is: "continue arguing, decide something by wednesday"?
[09:16:41] do we need a patch by the wednesday meeting, if we want a runtime option?
[09:16:47] Definitely let's see what happens on -devel.
[09:17:24] Explosions!
[09:17:58] Andrew, a patch would be great. Could you do it, even if I can't promise we won't throw it away?
[09:18:44] an option to just turn the background sync on/off is easy, sure
[09:19:07] an option with the choices on/off/delayed, while still easy, probably warrants more testing
[09:19:21] but I can just submit background on/off; that's easy to do
[09:19:32] what should the option name be?
[09:20:24] (oh also, one more problem with more runtime options I don't think has been brought up: there is a limit to what you can give to 'bos create' over the wire, iirc)
[09:20:25] I was going to say "now we get to argue about the switch name". --make-chaskiel-happy={[off],on}
[09:20:26] -[no]bgsync?
[09:20:56] After all, naming is one of the two greatest problems in computer science.
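A minimal sketch of what the runtime option being asked for could look like, assuming the -nobgsync spelling floated here; the flag is the same hypothetical vol_disable_bg_sync variable used in the thread-creation sketch earlier, not an existing fileserver parameter, and the final spelling is still being argued below.

```c
/* Sketch of a fileserver runtime switch to disable the background sync
 * thread. Names are illustrative, not existing OpenAFS symbols. */
#include <string.h>

int vol_disable_bg_sync = 0;    /* default: background syncing stays enabled */

/* Called from the fileserver's argument-parsing loop.
 * Returns 1 if the argument was recognised and consumed. */
static int
ParseSyncOption(const char *arg)
{
    if (strcmp(arg, "-nobgsync") == 0) {
        vol_disable_bg_sync = 1;
        return 1;
    }
    return 0;
}
```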
[09:21:22] nobgsync would seem reasonable
[09:21:27] --disable-background-sync-and-i-understand-the-dangers-to-my-sanity
[09:22:13] No, because we can't change the option name when we discover background sync is causing the hole in the ozone layer and we need to tell everyone to turn it off now.
[09:22:47] default bgsync to on or off?
[09:22:58] On.
[09:22:59] (this can be argued later obviously, just getting initial opinions for the initial submission)
[09:23:56] Stable release => opt out of current behaviour.
[09:24:37] And a strong recommendation to opt out.
[09:26:23] hmm, so, this is intended as a 1.6-only change? since it's gone on master and we're not proposing bringing it back
[09:26:44] absolutely not
[09:26:44] so the option on master is a no-op for turning it off, but.... an error for turning it on? (or also a no-op)
[09:26:59] I think it's a case for a 1.6-only change.
[09:27:14] oh what a mess
[09:27:30] I expected it would at least be a no-op on master, since if you upgrade to post-1.6 whatever it is....
[09:27:44] and it would be a bit confusing to have it in the man pages for only 1.6 :)
[09:27:53] Yeah. Yuck.
[09:27:56] but no no, I can figure something out
[09:28:05] option generates an error at startup if set to on
[09:28:09] no need to take up everyone's time here; we can argue against it in gerrit
[09:28:12] Same thing, different default?
[09:28:24] Maybe make a --sync= option and then have it take a load of different options, which can be different on each branch.
[09:28:39] Which means we can error according to what's supported in a particular binary
[09:28:40] Great idea...
[09:28:58] sync=pony
[09:29:12] sync=do-what-i-mean
[09:29:55] Why wasn't it killed before 1.6.0?
[09:30:17] no one noticed it was corrupting data
[09:30:23] basically
[09:30:28] people got tired of arguing about it, so this didn't come up again until another corruption bug happened
[09:30:36] because so few people run master in production environments
[09:31:09] Let's have the patch available, thanks Andrew.
[09:31:18] And decide next Wednesday.
[09:32:55] 4 days of weekend - I'll hopefully get some more stuff merged tomorrow.
[09:34:10] Andrew, is there an ETA for the list of things you'd like to push for?
[09:35:47] "today"; I'll say, 22 hours from now
[09:36:37] Thanks.
[09:37:11] I take it there are no objections by the gatekeepers to changes agreed in yesterday's meeting?
[09:37:47] --- meffie has become available
[09:37:51] I have the impression that the gatekeepers will stand behind you on just about anything you decide.
[09:38:02] dude, this is your show.
[09:38:15] unless you do something crazy, we're in
[09:39:13] Thanks. And there's always git revert if I do screw up.
[09:39:24] s/if/when/ :-)
[09:39:48] yep
[09:40:21] Got to leave. Thanks a lot for the discussion and your patience.
[09:40:32] --- Stephan Wiesand has left
[10:23:53] --- meffie has left
[10:29:26] --- stephan.wiesand has become available
[16:10:28] --- stephan.wiesand has left
[17:09:53] --- deason has left
[23:59:45] --- Simon Wilkinson has left