[01:05:22] --- steven.jenkins has left
[01:06:38] --- steven.jenkins has become available
[03:48:06] --- jaltman has left: Replaced by new connection
[03:48:07] --- jaltman has become available
[03:50:21] --- haba has become available
[05:53:12] --- reuteras has left
[06:01:37] --- abo has become available
[07:17:18] --- deason has become available
[07:18:45] --- abo has left
[07:20:22] --- mho has become available
[07:27:20] --- dwbotsch has left
[07:28:27] --- dwbotsch has become available
[07:35:10] --- abo has become available
[07:41:53] --- Simon Wilkinson has become available
[07:55:01] --- Simon Wilkinson has left
[08:09:04] --- meffie has become available
[08:37:28] --- Simon Wilkinson has become available
[09:11:50] --- mho has left
[09:13:16] --- kaj has left
[09:37:34] as of the 1.7.2 cygwin distribution, the docbook xsl stylesheets have been moved to a new location.
[09:38:27] And I guess Windows doesn't get to run configure tests ...
[09:38:58] --- haba has left
[09:41:04] we don't require a specific distribution of docbook. The developer gets to specify the tool locations as part of their environment definition.
[09:41:22] the default is cygwin and the paths for that are in the makefiles
[09:43:14] I guess if it's just a choice between two paths, you could use a bit of shell in the Makefile to find the one that they have installed.
[09:44:16] how? not all applications installed on Windows are in the path. cygwin isn't even installed at a fixed location.
[09:44:50] So how do you install a path in the Makefile?
[09:45:29] in order to build OpenAFS the developer has to define the toolset using environment variables.
[09:46:36] I'm not following what you're saying. Are the paths for the xsl stylesheets in the Makefiles or not?
[09:46:42] which C compiler? which sdk? which ddk/wdk? where is idn sdk? which docbook? which doxygen? where is htmlhelp compiler? ....
[09:47:34] there are default paths for the xsl stylesheets in NTMakefile, which are a guess in case the dev did not specify where docbook is located.
[09:48:04] if the location of CYGWIN is defined then the makefiles take a guess
[09:48:26] So, presumably, you could take one guess, test to see if the files are there, and if they are not, then make another guess.
[09:48:47] yes
[09:48:53] it's just annoying
[09:49:16] yeh
[09:51:58] the windows dev environment is much too complex, but since we build using a combination of tools from different sources I really don't know how to simplify it
[10:17:32] Well, this is sort of what autoconf is for.
[10:18:04] Not, sadly, on Windows.
[10:18:30] There are tools which aim to provide a cross-platform build system with automake-like features (SCons, CMake, and friends), but I don't think we're ready to change over to using them.
[10:18:45] um. i don't think they're ready for us to care.
[10:18:59] it's not we who should be ready. if i have to think that hard, i don't want
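The guess-and-fallback idea from [09:48:26] could be a few lines of NTMakefile preprocessing rather than shell. A rough sketch, assuming nmake's !IF EXIST directive; both stylesheet paths are illustrative stand-ins for the pre- and post-1.7.2 cygwin locations, not the actual layout:

# Sketch only: pick whichever guessed docbook-xsl location exists.
# The two paths below are hypothetical examples.
!IFNDEF DOCBOOK_XSL_DIR
!IF EXIST($(CYGWIN)\usr\share\sgml\docbook\xsl-stylesheets)
DOCBOOK_XSL_DIR = $(CYGWIN)\usr\share\sgml\docbook\xsl-stylesheets
!ELSE
DOCBOOK_XSL_DIR = $(CYGWIN)\usr\share\docbook-xsl
!ENDIF
!ENDIF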
[10:21:51] --- Simon Wilkinson has left
[10:30:27] --- Russ has become available
[10:42:15] --- kaj has become available
[10:44:00] --- kaj has left
[10:44:06] --- kaj has become available
[10:54:02] --- jaltman has left: Disconnected
[11:03:39] --- jaltman has become available
[11:10:24] --- deason has left
[11:10:57] --- deason has become available
[11:47:35] --- Simon Wilkinson has become available
[12:29:25] so on solaris, we allow you to umount /afs with open files
[12:29:43] if I want to prevent that, should I be looking at the vcache lists? is that sufficient?
[12:30:38] I would have thought that the vnode reference counts would be the thing to look at.
[12:32:16] Walk the vlruq, and check the ref counts (I can't remember if we count onto the vlruq on Solaris, so you'll either be checking for ones which are >0, or ones which are >1)
[12:33:03] The OS should be keeping a refcount on the filesystem to prevent this.
[12:33:16] It shouldn't be up to us. But if it is, then _we_ should be keeping a refcount.
[12:33:43] apparently it's not; the FSs I see in opensolaris appear to be doing this check themselves, if I'm reading them correctly
[12:35:13] When you solve it, make sure you handle the mmap() close() case correctly. Just hooking the open() and close() vnops is probably insufficient.
[12:35:37] okay, so I can scan vc->vlruq (starting with just the mt root?) and look at the refcount?
[12:36:23] Take a look at what ShakeLooseVCaches does. You want to do pretty much the same, but error out if you encounter a vcache with an active ref count.
[12:36:40] That would be my approach, anyway. jhutz may have an alternate suggestion.
[12:37:41] okay, VLRU was what I meant by a list (or queue, I suppose) of vcaches (or something with the free list, I assumed VLRU)
[12:39:09] my confusion was whether a vcache is just a 'cache' in the sense it can get kicked out while some handle still stays open; but my impression is that it's not the case?
[12:39:18] i.e. it's just our extension of the OS vnode
[12:39:26] Scanning all the vcaches to see if they are referenced is silly.
[12:39:32] supporting force unmount would be good tho, esp. if there's a vdrain equivalent
[12:39:37] It means unmounting is linear in the number of vcaches, rather than constant
[12:39:52] funmount :-)
[12:40:21] there is the afs_vcount counter; I would say to just try to enable that for other OSs, but I presume it's tied closely with the dynamic vcache code right now?
[12:40:34] Don't trust it.
[12:42:03] > It means unmounting is linear in the number of vcaches, rather than constant
it's not like this happens a lot; we don't work on stopping/starting the solaris client anyway
[12:42:38] Well, it looks like we're using a private reference count on Solaris, so we should be able to use a filesystem ref count if you just find the right places.
[12:42:57] It happens every time you shut a machine down. It matters if it takes noticeable time with modern cache sizes.
[12:43:30] afs_vcount isn't what you want
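A minimal sketch of the ShakeLooseVCaches-style scan suggested at [12:36:23], checking the VLRU for vcaches that still hold references. The helper name is hypothetical, the caller is assumed to hold GLOCK and afs_xvcache, and the >0 threshold would need to become >1 if vcaches carry a queue reference on Solaris:

/* Hypothetical: walk the VLRU and report whether any vcache is still
 * in use, mirroring the loop shape of ShakeLooseVCaches. */
static int
afs_AnyVCacheInUse(void)
{
    struct afs_q *tq, *uq;
    struct vcache *tvc;

    for (tq = VLRU.prev; tq != &VLRU; tq = uq) {
        tvc = QTOV(tq);
        uq = QPrev(tq);
        if (VREFCOUNT(tvc) > 0)
            return 1;   /* open handle somewhere; refuse the umount */
    }
    return 0;
}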
[12:47:08] > it looks like we're using a private reference count on Solaris
what is it? I thought we were using v.v_count or something
[12:47:38] For our purposes, we use vrefCount in our private extension to the structure.
[12:48:14] er, #define vrefCount v.v_count
[12:48:23] Oh yuck.
[12:49:00] Actually. How on earth is this SMP safe?
[12:49:01] isn't this just so easy and fun to read?
[12:49:28] I mean, the GLOCK will protect our usage of that, but what stops the rest of the kernel from racing us on ref count updates?
[12:49:38] we don't lock the vnode itself or something?
[12:49:52] we also don't update that ourselves, I don't think
[12:49:53] unless we do
[12:49:58] Oh, we do.
[12:50:54] The vnode will be locked when we're dealing with VFS operations. But not when we're doing things like handling callback breaks, or anything that's done from daemon tasks.
[12:51:05] I don't think directly on solaris... hard to read the ifdef maze in a few places...
[12:51:20] at least in one place, we use VN_HOLD instead of VREFCOUNT_INC on solaris I think
[12:51:32] which leads me to believe we should be using that instead if we're not
[12:51:33] Ah - the VREFCOUNT macros aren't used on Solaris?
[12:52:18] ask the ifdefs; my guess is maybe it's just used for accessing, but we use VN_HOLD/VN_RELE for actually getting/putting refs
[12:53:03] Oh well, the way Oracle are going we won't need to care about Solaris in a few years, anyway.
[12:53:33] None of this helps with your original question, though.
[12:54:54] I assume I just need to find all of the points where references are added/removed from vnodes.... which I assume I will get wrong
[12:55:54] I had thought the vfsp->vfs_count field would give me the kind of refcount I want, but it still seems to be 1 even if I have files open
[13:00:55] Just finding all of those places won't help you if there are places where the OS takes an additional reference count on an existing vnode itself.
[13:01:09] I don't know enough about how the VFS layer works in Solaris to know whether those exist or not.
[13:11:37] I'm not sure I need every single reference, just a count of still-referenced vnodes
[13:12:16] for instance, I thought gafs_inactivate / afs_InactiveVCache was called when the OS says that we can do away with the vnode...
[13:13:28] I think inactivate is called when the reference drops to 0. That is, it isn't symmetrical with open()
[13:14:33] or just looking at when vcaches go on/off the free list... or can they be unreferenced for a while while staying off the free list?
[13:14:45] What you ideally want is a function which is called every time a process obtains a reference to a vnode (this is 'open'), and one which is called every time a process drops that reference.
[13:17:34] well, I'm not immediately seeing how it would break if I add to the count on vgets, and dec on inactives... but I'm not sure
[13:18:18] Providing we do all of the reference counting, that should be fine. The issue would be if it is ever asymmetric (the kernel gets, we put, for example)
[13:18:48] Oh, sorry, misunderstanding.
[13:19:03] You mean in afs_osi_vget()
[13:19:26] or the caller, but yeah
[13:22:31] That doesn't sound terrible. How well does it align with what other OpenSolaris filesystems are doing?
[13:25:06] In fact, from what I've just read, vget() and inactive() are direct counterparts. If you increment a reference count in vget() and decrement it in inactive(), you'll have an accurate filesystem reference count.
[13:28:44] zfs just seems to be looking at vfs_count.... but smbfs looks through some list of data structures it maintains... so I'm not really sure
[13:29:12] maybe there's something we can do to make vfs_count more useful... I'll try to check that out
[13:29:35] http://developers.sun.com/solaris/articles/solaris_internals_ch14_file_system_framework.pdf is less helpful than it could be, but points at vget and inactive being at either end of the vnode lifecycle.
[13:31:37] Also, we've been here before, it would appear ... http://archive.netbsd.se/?ml=openafs-devel&a=2007-02&t=3121778
[13:32:07] and there's this, too http://wiki.genunix.org:8080/wiki/index.php/Writing_Filesystems_-_VFS_and_Vnode_interfaces
[13:32:36] That thread from openafs-devel contains advice from Sun, which seems to be "walk all of your active vnodes"
[13:33:45] --- abo has left
[13:33:57] It sounds like if you want to use vfs_count, then you need to do some other kind of magic.
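The symmetric counting proposed at [13:17:34] and [13:25:06], reduced to a sketch. All names here are hypothetical, and the counter would need a real lock (afs_xvcache, say) rather than bare increments:

/* Sketch: one filesystem-level count of live vnodes, maintained at the
 * two ends of the vnode lifecycle.  Not the actual OpenAFS code. */
static afs_int32 afs_active_vnodes;    /* protect with a suitable lock */

void
afs_vnode_obtained(void)    /* call from afs_osi_vget() on success */
{
    afs_active_vnodes++;
}

void
afs_vnode_released(void)    /* call from the inactive vnodeop */
{
    afs_active_vnodes--;
}

int
afs_unmount_busy(void)      /* consult from afs_unmount() */
{
    return afs_active_vnodes > 0;
}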
[13:34:15] i think perhaps a vfs count, combined with, if force unmount fails, walking all vcaches, is fine (sorry, was driving)
[13:34:39] --- abo has become available
[13:35:02] i looked at this question recently for macos when i was chasing a leak (not as permanent code)
[13:35:36] of course on macos we *only* respond to umount -f, which.... blows. but i haven't found an answer yet (otherwise finder is happy to eject us)
[13:35:48] i should reconfirm that's true
[13:36:14] but vfs_count should probably just be made useful
[13:40:53] VFS_HOLD and VFS_RELE, I think; zfs seems to use those
[13:41:07] Yes. Those are the interfaces you need to use.
[13:41:13] we almost certainly need more of those
[13:41:32] probably in similar places to the
[13:41:34] hey
[13:41:35] I suspect that holding in vget and releasing in inactive is probably the right way to go.
[13:41:44] the disconnected lock/unlock stuff should nearly overlap
[13:42:09] is... yeah, i guess that'd work.
[13:42:09] I don't think it does.
[13:42:16] why not?
[13:42:22] The disconnected stuff holds a lock for the lifetime of a single vnode operation.
[13:42:24] yeah (vget/inactive), that may be it; trying to look at zfs as an example is difficult, though... it calls them in many places
[13:42:41] i bet lofs is a better one to look at
[13:43:24] You want to hold a reference to the VFS across the life of a vnode - as long as a process has a handle on a vnode, there should be a reference count on the vfs.
[13:45:16] lofs maintains its own count.... which is updated in many places, augh
[13:48:27] could just ask solaris people.... I suppose the port-solaris list would be good, if I had assurance that relevant people were on it
[13:50:05] unclear they all are. lemme go look
[13:51:35] dale's not. the rest of the people who seem likely are.
[13:51:40] frank batschulat is
[13:51:46] lofs only seems to increment the reference count when it gets and frees inodes.
[13:52:06] a shame. it's a nice simple filesystem. maybe tmpfs?
[13:52:11] hmmm, the old openafs-devel post suggests that we should be crawling all vcaches anyway, to flush them
[13:52:11] There's a load of assertions, but I can only see two places where it actually manipulates it in opengrok.
[13:52:20] like, zfs is a clusterfuck to look at
[13:52:38] called in makelonode, but that has a bunch of callers; maybe I should look at them more
[13:52:43] Flush shouldn't be needed to... regular unmount
[13:53:23] right right, shouldn't be anything there anyway....
[13:55:11] The problem with lofs is that it's creating shadow vnodes, and doing the reference counting on them. The shadow vnodes are created on lookup, because the vfs layer doesn't actually know about them.
[13:56:22] tmpfs would make jhutz upset (it walks a list of its vcaches)
[14:05:55] ew
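Putting [13:40:53] and [13:41:35] together: a sketch of where the VFS_HOLD/VFS_RELE pairing might land, so that vfsp->vfs_count tracks live vnodes and umount can simply test it. This is a guess at the shape, not the actual OpenAFS Solaris code; function names and signatures are approximate, and the bodies are elided down to the relevant calls:

/* Sketch: take one vfs reference per live vnode, held from vget until
 * inactive, so vfs_count reflects open handles at umount time. */
int
afs_vget(struct vfs *vfsp, struct vnode **vpp, struct fid *fidp)
{
    /* ... existing fid lookup that produces *vpp ... */
    VFS_HOLD(vfsp);         /* pairs with the VFS_RELE in inactive */
    return 0;
}

void
gafs_inactive(struct vnode *vp, struct cred *cr)
{
    struct vfs *vfsp = vp->v_vfsp;

    /* ... existing vcache inactivation ... */
    VFS_RELE(vfsp);         /* last reference on this vnode is gone */
}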
[14:06:25] Hm, are there tarball releases of the 1.5.73.x point releases somewhere?
[14:07:02] Russ: no. but they're tagged.
[14:07:10] i can make them if you need them.
[14:07:18] hey, how convenient, posts on port-solaris
[14:07:30] sorry. i approved just now
[14:07:42] 1.5.73.3 is now tagged.
[14:08:33] I guess it doesn't matter; I can probably do the packaging from the tagged upstream release, particularly since I'm repackaging things anyway to remove WINNT.
[14:08:52] And combining the doc and src trees.
[14:09:00] So it's not like I'm preserving any connection with the tarball releases.
[14:47:25] I've enabled git archive support on the openafs.git repository, FYI, since I want to use it for the Debian orig tarball generation.
[14:48:08] Oh. I'd always meant to do that. Thanks!
[14:53:05] 1.5 can now use other file systems for the AFS disk cache, can't it?
[14:53:13] Yes.
[14:53:16] Since LINUX_USE_FH will be defined with current kernels.
[14:53:21] * Russ updates README.Debian accordingly.
[14:53:55] USE_FH should go away at some point. It was only ever there as an "until we're brave enough".
[14:54:09] I thought it wasn't available with older kernels.
[14:54:38] It uses the exportfs interface that nfsd uses. It's available on any kernel with nfsd.
[14:54:56] Ah, okay. Is that in 1.4 now as well?
[14:55:01] The new cache handling?
[14:55:27] No. Only the absolute minimum that was necessary to support kernels without iget() went into 1.4.x
[14:56:53] Okay, cool. I don't have to update the 1.4 documentation.
[14:57:15] Let's see -- bos-new-config, bos-restricted-mode, and largefile-fileserver are now gone in 1.5 because they're just always enabled, right?
[14:57:26] Yes, yes, and yes. I think.
[14:57:40] yeah, they don't seem to be in the help any more.
[14:57:58] --- deason has left
[14:58:01] Basically, I went through the list of things we'd discussed getting rid of for 1.6, and did all of those that were easy.
[14:58:05] Is there any documentation somewhere about what BosConfig changes are required for demand-attach?
[14:58:28] Andrew must have known that you were about to ask that ...
[14:59:10] --- deason has become available
[14:59:17] http://blog.endpoint.com/2009/06/getting-started-with-demand-attach.html is the best that I have seen.
[14:59:40] Also from Steven is http://www.dementia.org/twiki/bin/view/AFSLore/DemandAttach
[15:00:14] Is there any man page documentation for the fs command for disconnected mode?
[15:00:20] Excellent, thanks.
[15:00:28] I'll try to turn that into documentation in the tree.
[15:04:50] >> are there tarballs of 1.5.73.x?
> can make them if you need them
I think a 1.5.73.3 tarball would save me a build-depends on git for this presumptive openafs-devel freebsd packaging.
[15:27:30] I could have sworn I already pushed to Gerrit a fix for the fact that src/rx/rx.c is executable.
[15:28:18] I don't think I've seen one go past.
[15:28:50] * Russ pushes one.
[15:29:19] And now trying the new 1.5.73.3 client. We'll see if my system survives. :)
[15:30:03] As long as your system isn't a Mac, you should be fine :)
[15:31:10] Hopefully. :) I'm 0/3 on testing 1.5 clients on Debian.
[15:32:30] --- deason has left
[15:34:05] Annnd... 0 for 4. Immediate aklog segfault.
[15:34:38] Although it seems to have worked before it segfaulted.
[15:35:02] And kernel panic on write.
[15:35:10] yay!
[15:35:19] I wonder what we've broken.
[15:35:38] [2489747.568520] BUG: unable to handle kernel NULL pointer dereference at (null)
[2489747.568524] IP: [] rx_WriteProc+0x59/0xa9 [openafs]
[15:35:48] [2489747.568632] Call Trace:
[2489747.568643] [] ? afs_linux_splice_actor+0x2c/0x40 [openafs]
[2489747.568649] [] ? splice_from_pipe_feed+0x3e/0xb9
[2489747.568659] [] ? afs_linux_splice_actor+0x0/0x40 [openafs]
[2489747.568668] [] ? afs_linux_splice_actor+0x0/0x40 [openafs]
[2489747.568671] [] ? __splice_from_pipe+0x2f/0x51
[2489747.568674] [] ? splice_direct_to_actor+0xe6/0x19a
[2489747.568683] [] ? afs_linux_ds_actor+0x0/0xa [openafs]
[2489747.568693] [] ? afs_linux_storeproc+0x86/0x10a [openafs]
[2489747.568707] [] ? afs_CacheStoreDCaches+0x181/0x3e4 [openafs]
[2489747.568718] [] ? StartRXAFS_StoreData64+0x68/0x7c [openafs]
[15:36:31] What kernel version?
[15:36:46] 2.6.32-trunk-686-bigmem
[15:36:55] (That's the faster writes stuff that uses splice to reduce the number of page copies. I wrote that, and it certainly used to work.)
[15:37:35] Is there a panic message before the BUG in that log?
[15:37:35] I suspect I'm going to have to reboot anyway at this point, so I'll try 2.6.32-3 and see if anything different happens.
[15:37:44] Nope. :/
[15:37:56] Also, gdb output from aklog would be interesting to see.
[15:38:15] Yeah. I should have captured that before I tried writing.
[15:38:24] windlord:~rra> aklog
*** glibc detected *** aklog: free(): invalid next size (fast): 0x08e6f3c8 ***
[15:38:33] So heap corruption of some kind.
[15:39:01] kaduk mentioned seeing heap corruption on aklog on FreeBSD recently. I wonder if it is the same problem.
[15:39:24] Probably. Let me reboot and then I'll run it under valgrind and see what happens.
[15:39:39] k
[15:39:42] --- abo has left
[15:40:10] --- Russ has left: Disconnected
[15:40:34] --- abo has become available
[15:41:57] I have a horrible feeling that with bigmem, that code needs a kmap and kunmap to be safe.
[15:47:00] --- Russ has become available
[15:48:13] The aklog backtrace from gdb is as unhelpful as you might expect.
[15:48:41] valgrind produces *tons* of errors.
[15:49:06] I should build a version with debugging symbols so that we can see where they all are.
[15:49:40] Lots of uses of uninitialized variables and conditional jumps on uninitialized variables.
[15:49:59] I wonder how much of that is LWP blowing valgrind's mind.
[15:50:04] A bunch of invalid read errors from setcontext, which may just be par for the course from LWP.
[15:50:33] > aklog
Howdy. My plan to run aklog in valgrind had stalled, since I have an old ports tree that doesn't have a valgrind that works for amd64. Updating my ports tree is probably a full-weekend project.
[15:50:36] aklog is quite possibly my fault, as I made a load of changes to it recently as part of the rxgk work. They all worked here, though.
[15:50:36] But yeah, 5000 lines of errors.
[15:50:59] No idea how much of that is just LWP noise.
[15:51:21] But yes, the "gets tokens for the first cell passed on the command line and then segfault" is what I was seeing.
[15:51:23] Also, aklog apparently leaks 307KB of memory, not that that probably matters.
[15:52:12] Russ: How do you feel about trying untested patches?
[15:52:30] I can do that fairly easily at the moment.
[15:52:48] I haven't started the AFS client with 2.6.32-3 yet. Should I do that or wait for the patch?
[15:52:55] Wait for the patch ...
[15:52:57] Okay.
[15:53:07] * Russ is building an aklog with debugging symbols now.
[15:53:33] /afs/inf.ed.ac.uk/user/s/sxw/Public/openafs-splice-fix.patch should fix the splice problem. Completely untested, though.
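The patch itself isn't in the log; given the "needs a kmap and kunmap" remark at [15:41:57], it presumably wraps the page access in the splice actor along these lines. This is a guess at the shape of the fix, not its contents, using the 2.6.32-era splice actor signature:

/* Sketch: map the pipe buffer's page before handing it to Rx, and
 * unmap it afterwards.  On a highmem (686-bigmem) kernel, touching an
 * unmapped highmem page can yield a NULL kernel address, which would
 * match the rx_WriteProc NULL dereference in the oops above. */
static int
afs_linux_splice_actor(struct pipe_inode_info *pipe,
                       struct pipe_buffer *buf,
                       struct splice_desc *sd)
{
    struct rx_call *call = sd->u.data;  /* assumed plumbing */
    char *kaddr;
    int code;

    kaddr = kmap(buf->page);
    code = rx_Write(call, kaddr + buf->offset, sd->len);
    kunmap(buf->page);

    return code;
}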
[15:55:16] Rebuilding the module now.
[15:56:51] * Russ looks for anything in aklog that doesn't involve savecontext or setcontext.
[15:58:11] Are you just running 'aklog' with no options?
[15:58:22] Yup.
[15:58:26] Without an AFS client even running.
[15:58:38] This looks promising:
[15:58:41] ==10221== Invalid write of size 1
==10221==    at 0x4190E24: _IO_default_xsputn (genops.c:480)
==10221==    by 0x4164848: vfprintf (vfprintf.c:1333)
==10221==    by 0x418582B: vsprintf (iovsprintf.c:43)
==10221==    by 0x416ED5A: sprintf (sprintf.c:34)
==10221==    by 0x804BA4E: auth_to_cell (in /home/eagle/dvl/openafs/src/aklog/aklog)
==10221==    by 0x804D13C: main (in /home/eagle/dvl/openafs/src/aklog/aklog)
==10221==  Address 0x42cd634 is 0 bytes after a block of size 4 alloc'd
==10221==    at 0x4024C4C: malloc (vg_replace_malloc.c:195)
==10221==    by 0x419A75F: strdup (strdup.c:43)
==10221==    by 0x804B032: rxkad_build_native_token (in /home/eagle/dvl/openafs/src/aklog/aklog)
==10221==    by 0x804B26D: rxkad_get_token (in /home/eagle/dvl/openafs/src/aklog/aklog)
==10221==    by 0x804B7A2: auth_to_cell (in /home/eagle/dvl/openafs/src/aklog/aklog)
==10221==    by 0x804D13C: main (in /home/eagle/dvl/openafs/src/aklog/aklog)
[16:00:49] Okay, trying new kernel module now.
[16:01:21] I can write to AFS. That's a good sign. :)
[16:03:03] Hm.
[16:03:07] But I'm seeing weird file corruption.
[16:03:24] On stuff you're writing?
[16:03:27] No.
[16:04:05] windlord:/afs/.ir/systems/i386_linux24/pubsw/lib> dir libiconv.so.2
lrwxr-xr-x 1 70526 root 17 2005-12-19 11:45 libiconv.so.2 ->
[16:04:32] It claims to be a symlink pointing to nothing.
[16:04:59] On another system:
[16:05:01] exodus:/afs/.ir/systems/i386_linux24/pubsw/lib> ls -l libiconv.so.2
lrwxr-xr-x 1 70526 root 17 Dec 19 2005 libiconv.so.2 -> libiconv.so.2.3.0
[16:05:32] Okay, so that's a symlink with a bad target.
[16:05:36] Running fs flush on it fixes it.
[16:05:43] Yeh. I was going to ask that.
[16:05:58] Maybe an artifact of the 1.4 -> 1.5 client upgrade?
[16:06:00] It's possible that the earlier crash scrambled your disk cache.
[16:06:21] Hm, does fs setcache 1 no longer purge the cache?
[16:06:41] * Russ stops the client and kills CacheItems.
[16:06:43] It should do. If it doesn't, then that's yet another bug.
[16:06:54] windlord:/root# fs setcache 1
New cache size set.
windlord:/root# fs setcache 0
New cache size set.
windlord:/root# fs getcache
AFS using 2039 of the cache's available 50000 1K byte blocks.
[16:07:15] Grumble.
[16:07:23] fs setcache 1 returned immediately too.
[16:07:28] I'm used to it taking a while to blow away the cache.
[16:07:57] Do you want to stick that in RT for later?
[16:08:06] yeah, or I can forward these all to RT once I finish.
[16:08:17] I think I've got a fix for aklog for you.
[16:08:41] Okay, things seem good after flushing the cache and restarting.
[16:08:52] I suspect we can write that one off to the previous crash scrambling things, or the 1.4 to 1.5 upgrade.
[16:09:03] although if it's the latter we should warn people to blow away the cache when upgrading.
[16:09:35] I've bounced between 1.4.x and 1.5.x in testing without touching the cache and not had any problems.
[16:10:05] --- abo has left
[16:10:29] Okay, probably just the crash, then.
[16:10:42] All the data files in D0 were 0-length.
[16:10:43] --- abo has become available
[16:12:31] Which filesystem?
[16:12:35] ext3
[16:13:15] /afs/inf.ed.ac.uk/user/s/sxw/openafs-aklog-nasty.patch is a somewhat ick fix for the aklog string sizing problem.
[16:17:38] /afs/inf.ed.ac.uk/user/s/sxw/openafs-aklog-nasty.patch: Permission denied
[16:17:51] Bah. Sorry. Wrong level.
[16:19:37] * Russ hehs. Yeah, that's nasty.
[16:19:46] Don't we have asprintf in the tree?
[16:20:18] yeah, afs_asprintf.
[16:20:23] I think that's what you want here.
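The valgrind trace above points at a sprintf in auth_to_cell writing past a buffer that was sized before rxkad_build_native_token got involved. The afs_asprintf replacement being discussed would look something like this sketch; the variable names and format string are illustrative, not the actual aklog code:

/* Sketch: let afs_asprintf size the buffer instead of sprintf'ing
 * into a fixed-size target.  'username' and the format are made up
 * for illustration. */
char *username = NULL;

if (afs_asprintf(&username, "%s@%s", name, cell_to_use) < 0) {
    fprintf(stderr, "aklog: out of memory\n");
    exit(1);
}
/* ... hand username off ... */
free(username);
username = NULL;    /* see the asprintf-failure discussion below:
                     * NULLing the pointer is cheap insurance */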
[16:23:33] --- Simon Wilkinson has left
[16:25:47] --- Simon Wilkinson has become available
[16:26:01] Sorry, Mac go boom.
[16:27:04] I love the dummy write a few lines below that code.
[16:27:17] There are just so many wrong things expressed by that comment.
[16:27:46] Yeah.
[16:28:11] But yeah, that looks like the code. Replacing that with a free and an afs_asprintf should take care of it.
[16:28:16] * Russ tries that now to confirm.
[16:28:22] Do we still support AIX4? If not, I guess we could consign that line to the oblivion it so richly deserves.
[16:28:33] yeah, I checked that just now, but we still support 4.2.
[16:28:37] I don't know if 4.2 also had that bug.
[16:30:53] Yup, that fixes it.
[16:31:17] I have a patch with afs_asprintf -- I can push or let you push, whichever.
[16:31:25] You wrote the patch - you push!
[16:31:32] I'll push the splice fix.
[16:32:36] http://gerrit.openafs.org/1705
[16:34:59] http://gerrit.openafs.org/1706 for aklog
[16:35:04] --- summatusmentis has left: Lost connection
[16:36:13] --- summatusmentis has become available
[16:36:15] Is asprintf() not guaranteed to set the string pointer to NULL if it fails?
[16:36:21] Correct.
[16:36:32] So you don't need to set username = NULL?
[16:36:32] POSIX says "contents of strp is undefined."
[16:36:44] Correct that it's not guaranteed.
[16:36:48] Ah. BSD says it's NULL.
[16:37:05] Oh, sorry, not POSIX.
[16:37:09] glibc says undefined.
[16:37:13] asprintf isn't in POSIX.
[16:37:33] I didn't check what our afs_asprintf does.
[16:37:35] We may guarantee this.
[16:38:13] We should probably just NULL it anyway, just to be on the safe side.
[16:38:19] Ah, yes, we do guarantee it.
[16:38:48] I suspect our asprintf came from Heimdal. Love has a very nice habit of setting all output variables to known values at the start of every function.
[16:38:51] But yeah, it's good coding practice with asprintf, since it's not guaranteed.
[16:47:07] --- summatusmentis has left
[16:57:06] --- steven.jenkins has left
[17:00:06] --- steven.jenkins has become available
[17:00:17] --- phalenor has left
[17:10:20] --- phalenor has become available
[17:11:29] --- kula has left
[17:28:10] --- shadow@gmail.com/owl91569290 has left
[18:25:29] --- phalenor has left
[18:28:59] --- Russ has left: Disconnected
[18:31:23] --- deason has become available
[18:32:31] --- kula has become available
[18:35:31] --- phalenor has become available
[18:37:46] > DAFS BosConfig changes
you change the bnode type to 'dafs' and add a line for the salvageserver, as those links should demonstrate
[18:51:35] --- Russ has become available
[19:23:08] --- meffie has left
[19:35:05] --- Simon Wilkinson has left
[19:35:42] I pushed a doc update to Gerrit that should hopefully provide plenty of pointers.
[19:36:01] http://gerrit.openafs.org/#change,1707
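Concretely, the dafs bnode described at [18:37:46] ends up as a BosConfig stanza along these lines; this is a sketch based on the linked write-ups, and the /usr/afs/bin paths and binary names are the conventional ones, which should be checked against the actual installation:

bnode dafs dafs 1
parm /usr/afs/bin/dafileserver
parm /usr/afs/bin/davolserver
parm /usr/afs/bin/salvageserver
parm /usr/afs/bin/dasalvager
end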
[20:03:03] --- shadow@gmail.com/owl4EAC463D has become available
[20:44:50] --- Born Fool has become available
[21:58:55] --- jaltman has left: Replaced by new connection
[21:58:56] --- jaltman has become available
[22:01:18] --- reuteras has become available
[22:05:39] --- jaltman has left: Replaced by new connection
[22:05:41] --- jaltman has become available
[22:06:09] --- deason has left
[22:07:56] --- Born Fool has left
[22:28:30] --- kaj has left
[22:48:29] --- steven.jenkins has left
[22:51:06] --- steven.jenkins has become available
[22:54:43] --- reuteras has left
[23:02:23] --- dwbotsch has left
[23:02:48] --- dwbotsch has become available
[23:12:03] --- reuteras has become available
[23:14:07] --- tharidufernando has become available
[23:28:34] --- kaj has become available
[23:56:40] --- haba has become available